SQL is a language used to manage relational databases. It provides a facility for declaring and using variables, which store temporary values in memory for performing calculations. In SQL, variables can be used in much the same way as in any other programming language.
SQL Declare Variable
In SQL we use the DECLARE statement to declare variables (the syntax shown here is SQL Server / T-SQL). Let's see the syntax:
Syntax:
DECLARE @variable_name data_type;
Here @variable_name is the name of the variable (prefixed with @) and data_type is its data type.
Example:
DECLARE @marks int;
Here I have declared a variable named @marks of type int.
SQL Assign Variable
Now that we have seen how to declare a variable, let's look at how to assign a value to it.
Syntax:
SET @variable_name = value;
The SET statement is used to assign a value to a variable in SQL.
Example:
SET @marks = 80;
Now that we have declared a variable and assigned a value to it, let's see how to use it in a SELECT statement.
SELECT * FROM Student WHERE total_marks > @marks;
Here we fetch the rows from the Student table whose total_marks is greater than @marks, i.e. greater than 80.
Please note that a variable declared inside a stored procedure is local to it and can be used only within that procedure; more generally, T-SQL variables are scoped to the batch or procedure in which they are declared.
So the DECLARE and SET statements are used to declare a variable and assign a value to it in SQL.
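If you run this T-SQL from an application, the declare, assign, and select steps can be sent together as a single batch. Below is a minimal Python sketch using the pyodbc driver; the connection string, the Student table, and its total_marks column are assumptions for illustration only.

import pyodbc  # third-party ODBC driver: pip install pyodbc

# Hypothetical connection details -- replace with your own server and database.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=School;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# The same DECLARE / SET / SELECT sequence from above, sent as one T-SQL batch.
batch = """
DECLARE @marks int;
SET @marks = 80;
SELECT * FROM Student WHERE total_marks > @marks;
"""
cursor.execute(batch)
for row in cursor.fetchall():   # students whose total_marks is greater than 80
    print(row)

conn.close()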
In today's era, networks and the Internet are used to send and receive data worldwide. A network is a complex structure consisting of many computers and networking devices.
The system that requests data is known as the client, and the system that provides the data is known as the host.
The TCP/IP and OSI models are the most widely used network models, and both use a layered architecture.
Before transmission, data moves through these layers. Encapsulation and decapsulation are the two processes used in this layered architecture; they are like boxing and unboxing a real-world object before and after it is sent by courier.
We can't send data directly over the Internet. A lot has to happen before transmission, such as establishing host-to-host communication and providing source and destination IP addresses. Fortunately, as users we do not need to do any of these chores; the network model takes care of them. All we need to do is select the data and press "send" or "upload".
Encapsulation
Encapsulation is the process of adding extra information to the original data that is required for accurate and safe transmission over the Internet.
Encapsulation can be described as a method of designing modular communication protocols. At each layer of a network model such as TCP/IP or OSI, the layer attaches its own header (and sometimes a trailer) to the data received from the layer above; the resulting unit is that layer's Protocol Data Unit (PDU). Each layer has its own PDU and performs its own task, and the final unit is what gets forwarded over the Internet.
When we select data for transmission, the layers start their work step by step. The selected data moves downwards through these layers, and at every layer some control information (not user data, but system data such as IP addresses, frame size, etc.) is added to the original data. This whole process is known as encapsulation.
Let’s take an example.
In this example, the user uses the TCP/IP model.
In the diagram above, the yellow-coloured box is the user data that the user wants to send over the network. When the user presses send or upload on their device, the data is forwarded to the application layer.
This is step 1. At this stage, the data itself is the PDU. The application layer presents data to the user in a format that is easy to understand; facilities such as email, internet surfing, and online shopping fall into this category.
In step 2, the data is forwarded to the transport layer, where it is encapsulated with a UDP (User Datagram Protocol) header, as you can see in the diagram above. The header identifies the UDP data. This PDU is known as a segment (for UDP it is also commonly called a datagram).
In step 3, the segment is sent to the internet layer, where it is encapsulated with an IP header; the PDU of this layer is known as a packet. Every device or system in the network has its own unique IP address, and the IP header carries the source and destination IP addresses. These details are used to locate the client and the host in the network.
In step 4, the packet is sent to the network access layer, where it is encapsulated with MAC addresses. This PDU is called a frame, and each frame has a frame header and a footer. A MAC (Media Access Control) address is the real-world hardware or physical address of a network adapter, such as the one in the host's computer; this hardware ID is allotted by the manufacturer of the NIC (Network Interface Card).
Once step 4 is completed, the frame is sent over the network. As we can see in the diagram above, the original data is wrapped in additional information at each stage; this addition of information is what we call encapsulation.
Each layer encapsulates the data it receives into its own PDU, and that whole PDU is encapsulated again into a new PDU at the next stage. The process continues until the last layer is reached.
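To make the four steps concrete, here is a small Python sketch. It is a toy illustration only: the header layouts, ports, addresses, and checksum below are simplified assumptions, not real UDP/IP/Ethernet formats. Each function wraps the PDU from the layer above in a new header, just as the layers do during encapsulation.

import struct

def add_udp_header(data: bytes, src_port: int, dst_port: int) -> bytes:
    """Transport layer: prepend a simplified UDP-style header -> segment."""
    return struct.pack("!HHH", src_port, dst_port, len(data)) + data

def add_ip_header(segment: bytes, src_ip: str, dst_ip: str) -> bytes:
    """Internet layer: prepend source and destination addresses -> packet."""
    header = bytes(map(int, src_ip.split("."))) + bytes(map(int, dst_ip.split(".")))
    return header + segment

def add_frame_header(packet: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    """Network access layer: add a MAC header and a trailer (footer) -> frame."""
    footer = struct.pack("!I", sum(packet) & 0xFFFFFFFF)  # toy checksum, not a real FCS
    return dst_mac + src_mac + packet + footer

data = b"Hello, host!"                                        # step 1: user data
segment = add_udp_header(data, 5000, 53)                      # step 2: segment
packet = add_ip_header(segment, "192.168.1.10", "8.8.8.8")    # step 3: packet
frame = add_frame_header(packet, b"\xaa" * 6, b"\xbb" * 6)    # step 4: frame

# The data unit grows at every layer as headers are added.
print(len(data), len(segment), len(packet), len(frame))       # 12 18 26 42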
Decapsulation
Decapsulation is the reverse of encapsulation: the additional information is stripped away to recover the original data. Once the frame reaches its destination, the host system, the reverse process starts. It is a bottom-to-top approach: the frame is received by the network access layer first and, after decapsulation, is passed up to the layer above it, the internet layer.
Just as with encapsulation, each layer performs its own decapsulation. For example, take the figure above, but this time in reverse order, as given below:
In step 1, the frame sent by the client is received by the network access layer of the host computer. The frame header and footer are removed, and the remaining PDU, the packet, is forwarded to the internet layer.
In step 2, the packet is decapsulated: the IP header is removed, leaving the segment PDU, which is forwarded to the transport layer.
In step 3, the segment is decapsulated in turn: the UDP header is removed and the original data is forwarded to the application layer.
In step 4, the original data is shown to the user.
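Continuing the toy sketch from the encapsulation section above (same simplified header sizes, and reusing the frame variable built there), decapsulation simply strips those headers off again in the opposite order as the frame travels up the layers of the host:

def strip_frame(frame: bytes) -> bytes:
    """Network access layer: drop the 12-byte MAC header and 4-byte footer -> packet."""
    return frame[12:-4]

def strip_ip_header(packet: bytes) -> bytes:
    """Internet layer: drop the 8-byte address header -> segment."""
    return packet[8:]

def strip_udp_header(segment: bytes) -> bytes:
    """Transport layer: drop the 6-byte port/length header -> original data."""
    return segment[6:]

# Steps 1-3: headers are removed layer by layer; step 4: the data reaches the user.
received = strip_udp_header(strip_ip_header(strip_frame(frame)))
print(received)   # b'Hello, host!' -- the original user data is recovered intact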
Both processes are related and necessary for smooth and secure data transmission over the network.
Difference between Encapsulation and Decapsulation
Encapsulation follows a top-to-bottom approach: the data moves in sequence through the layers, going down one layer at a time from the layer above.
Decapsulation follows a bottom-to-top approach: the data moves in sequence through the layers, going up one layer at a time from the layer below.
In encapsulation, each layer adds its own additional information to the original data.
In decapsulation, each layer removes the added information until the original data is recovered.
With encapsulation, the size of the data unit increases at each step.
With decapsulation, the size of the data unit decreases at each step.
Encapsulation and decapsulation together protect the original data and make reliable, successful data transmission over the network possible.
Computers are designed for specific kinds of work and come in different sizes and shapes depending on the requirement. There are many different types of computers: some are designed for personal use, while others are made for business and scientific purposes. Here we will explore the different types of computers.
Personal Computers (PCs)
Personal computers, also known as PCs, are the most common type of computer. They are designed for personal use at home, school, the office, and elsewhere. PCs come in two forms: desktops and laptops. Desktop computers are intended for regular use at a single location, on a desk, because of their size and power supply, while laptops are portable and easy to carry. Personal computers can be used for a wide range of tasks, including browsing the web, word processing, gaming, and multimedia.
Workstation
Workstations are high-performance computers designed for technical or scientific applications. They are used by engineers, architects, graphic designers, scientists, and others for work such as animation, video editing, and data analysis. These computers are expensive and built for complex tasks. A workstation is typically used by a single professional and is usually connected to a LAN (local area network). Workstations can handle huge amounts of data and process complex calculations quickly.
Mainframe Computers
Mainframe computers are used by large organizations such as banks, telecom companies, and government agencies. They are designed to support hundreds or thousands of users at the same time and are known for their reliability, security, performance, and ability to protect data. A mainframe can process huge amounts of data quickly, with very little chance of error during processing, and errors that do occur can be fixed quickly. Mainframe computers are expensive. They are used in defence departments to share large amounts of sensitive information between branches, and they have also been used in healthcare, education, the retail sector, and other fields.
Supercomputers
Supercomputers are designed to perform large and complex calculations in seconds. One of the most famous early supercomputers, the Cray-1, was developed in 1976 by Seymour Cray. Supercomputers are the fastest, biggest, and most powerful computers. They are mainly used for scientific and engineering applications, have large amounts of memory and processing power, and can perform trillions of calculations per second. They are also used in security work, for example to test how easily passwords and encryption can be cracked. Supercomputers are used in fields such as weather forecasting and molecular dynamics simulation.
Minicomputers
Minicomputers were developed in the mid-1960s. They are midsize, multiprocessing computers, smaller than mainframes and larger than microcomputers. They are mainly used in institutes and departments for work such as billing, accounting, and inventory management, and they are lighter and less expensive than mainframe computers.
Embedded Computers
A computer system integrated into another device to perform a specific function is called an embedded computer. It is designed for a limited set of operations; for example, the embedded computer in a car is not the same as the one in a washing machine. Embedded computers are found in everyday items such as smartphones, toys, and cars, where they control and monitor the device's functions. They are used in everything from small devices such as fitness trackers to large systems such as aircraft controls.
Tablets
Modern technology has produced pocket-friendly computers such as tablets and smartphones. A tablet has a touch-sensitive screen, is very lightweight and portable, and can serve as an alternative to a laptop for some tasks. Tablets are also used for gaming, watching videos, studying, browsing the web, and more, and they now have powerful graphics and processing capabilities.
Conclusion
Different types of computers are designed to meet specific needs and requirements: personal computers for homes and individuals, supercomputers for scientific work, and many others. By understanding the types of computers available today, it becomes easier to decide which computer best suits our needs.
The Internet is not as simple as we might think. It is a vast collection of interconnected hardware devices: devices are assembled into small groups, those groups are connected to each other, those larger groups are connected in turn, and the cycle goes on. Each device has its own speciality and role, and if any one device starts malfunctioning, it can affect the whole network. Sometimes these malfunctions occur on their own, and sometimes protocol behaviour pushes devices into them.
Network flapping is a kind of malfunction that arises from the network's own advanced traffic-control mechanisms.
Let’s discuss it in detail:
First, understand the word "flapping". It simply means unexpectedly or randomly switching between two choices. Suppose we want to choose between options A and B: the system says A is better, then suddenly says B is better, and when we select B it again says A is better. This back and forth is irritating, and we can never settle on one option.
Network Flapping
In a network, routers are responsible for finding and allotting the best route between client and host for data transmission. Routing is mainly divided into two types: static and dynamic.
A static router always uses a single fixed route for all transmissions, whereas a dynamic router can choose among more than one available path.
Network flapping occurs in dynamic routers: the router keeps switching between two different routes to the same host during transmission.
As we can see in the diagram above, the router has two routes to reach the same host. In normal situations, according to the routing protocol, the router should use only one route for transmission, as in the diagram below:
Or vice versa
In special cases, such as heavy traffic, the dynamic router can use the alternate route or both routes for transmission, as shown in figure 1. Every time the dynamic router chooses a route, it updates the routing table with the route it has provided. This is a somewhat complex and time-consuming process, but it helps ensure that transmission succeeds as quickly as possible.
So, what’s the issue here?
The problem arises when a dynamic router alternates between available routes even in normal situations where switching is not required. Every time it flaps, the router updates the routing table; so when it alternates randomly while sending a single piece of data, the unnecessary routing updates put extra pressure on the network. If the router keeps repeating this, the whole network can be disturbed.
These types of dynamic routers are known as flapping routers.
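As a small illustration, here is a minimal Python simulation of a flapping router; it is purely illustrative, and the route names and the host are made up. A router that re-chooses the path at random for every packet ends up rewriting its routing table over and over, which is exactly the unnecessary overhead described above.

import random

random.seed(1)                                      # fixed seed so the run is repeatable

routes_to_host = ["via Router A", "via Router C"]   # two equally reachable paths (assumed)
routing_table = {"HostX": routes_to_host[0]}        # hypothetical destination "HostX"
table_updates = 0

for packet_number in range(10):                     # send ten packets to the same host
    chosen = random.choice(routes_to_host)          # flapping: the route is re-chosen every time
    if chosen != routing_table["HostX"]:
        routing_table["HostX"] = chosen             # every flap rewrites the routing table...
        table_updates += 1                          # ...which is pure routing overhead

print(f"Routing-table updates caused by flapping: {table_updates}")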
Let’s take an example.
Suppose you are standing at a crossroads at a red signal. You need to go straight and the road ahead is clear, but the traffic controller forces you onto an alternate route, saying it is the best one. As a responsible citizen, you obey and start moving in that direction, but suddenly the controller stops you and tells you to take the original route, saying that one is best. It is a little irritating, but you obey and start moving in the original direction, only for the controller to stop you again and send you back to the alternate route. Now just think about what happens.
You get frustrated because the controller has effectively stopped you from using either way. You never reach your destination and your time is wasted.
At the same time, the traffic behind you builds up, because you are standing in the middle of the road as an obstacle to everyone following you.
So, what can we conclude?
We can't go anywhere.
Our time is wasted.
Traffic builds up and keeps increasing.
The whole crossroads fills with traffic, and soon the connected crossroads will be overcrowded too.
Just one unwanted situation created all this havoc. Worse, clearing all this traffic afterwards will take a lot of time and effort, so the best solution is to prevent the situation in the first place.
In a similar way, when a dynamic router starts unwanted flapping, the actual data packets get stuck mid-transmission and cannot reach their destination. Groups of packets pile up in a traffic-jam-like situation, and this kind of blockage can impact the whole network.
Route Summarization
To prevent the effects of network flapping, the route summarization technique is used. In route summarization, all the routes connected to a router are grouped in such a systematic way that neighbouring routers do not need to track each individual route. All the connected routers are arranged in a similar way.
Here figure 4 shows a network without route summarization, and figure 5 shows the same network with route summarization.
In route summarization, all the available routes with different IDs are masked behind a common ID. As we can see in figure 4, router B has three different routes with IDs B.1, B.2, and B.3, but in figure 5 these separate routes are all masked as Route B. During transmission, no matter which route (B.1, B.2, or B.3) is used between router B and the server, the routing table advertised to the neighbours always shows only Route B. So even if router B is flapping internally, it still advertises only path B, and the internal flapping does not affect routers A and C.
This is called Route Summarization.
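As a small illustration, Python's standard ipaddress module can compute such a summary. The specific prefixes below are made-up stand-ins for routes B.1, B.2, and B.3, with a fourth contiguous prefix added so the block collapses into a single summary route.

import ipaddress

specific_routes = [
    ipaddress.ip_network("10.1.0.0/24"),   # stand-in for route B.1
    ipaddress.ip_network("10.1.1.0/24"),   # stand-in for route B.2
    ipaddress.ip_network("10.1.2.0/24"),   # stand-in for route B.3
    ipaddress.ip_network("10.1.3.0/24"),   # extra prefix so the range collapses cleanly
]

# collapse_addresses merges contiguous prefixes into the smallest possible set,
# which is what a summarizing router advertises to its neighbours.
summary = list(ipaddress.collapse_addresses(specific_routes))
print(summary)   # [IPv4Network('10.1.0.0/22')] -- neighbours only ever see "Route B"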
Switching between routes is a useful traffic-control mechanism for dynamic routers, but when a router is configured to load-balance unnecessarily it can start unwanted flapping. Route summarization helps contain the damage in such cases.
Computers are essential devices that we use on a daily basis. Here we will discuss the generations of computers: their definition, historical context, importance, and more. The evolution of computer systems is usually described in terms of generations, which are differentiated by the advancement of the underlying technology.
Let’s discuss the development of computer technology in different generations.
First Generation
First-generation computers were developed from the late 1940s to the mid-1950s. Their primary electronic components were vacuum tubes (thermionic valves). These computers were big and slow and consumed a lot of power compared to modern computers; they were also costly and needed specialized technicians to maintain them. Input was based on punched cards and paper tape, and they worked with machine-level binary code.
ENIAC (Electronic Numerical Integrator and Computer) was the most famous first-generation computer; it was used for military calculations and scientific research.
UNIVAC (Universal Automatic Computer) was the first commercial computer, which was used for data processing applications.
Second Generation
Second-generation computers were developed from 1956 to 1963. They used transistors as their basic components, replacing the vacuum tubes of first-generation computers. Second-generation computers were smaller and faster than first-generation machines, and they used magnetic core memory, which increased storage capacity.
Machine-independent, high-level programming languages such as COBOL and FORTRAN made it easier for people without a deep technical background to use computers. Second-generation computers were used in large corporations, government organizations, scientific simulations, and more.
The second-generation computers represented a great leap forward in the development of computing technology and paved the way for the development of third-generation computers.
Examples: IBM 1401, IBM 7090, and IBM 7094.
Third Generation
Third-generation computers were developed from the mid-1960s to the early 1970s. They marked a valuable shift from previous generations in terms of performance, technology, and design. Integrated circuits (ICs) replaced the discrete transistors used in second-generation computers, making third-generation machines much smaller, more reliable, and more affordable than earlier generations, as well as better suited to commercial use.
This generation also brought improved input and output devices such as keyboards and monitors, which helped people interact with computers more easily. The third generation introduced minicomputers, which were smaller and cheaper than mainframes and made computers more accessible to small businesses and individuals.
Fourth Generation
Fourth-generation computers were developed from 1971 to 1980 and marked a major turning point in the development of computers. Microprocessors built with VLSI (very large-scale integration) were their main electronic components. A microprocessor is an integrated circuit that contains all the components of a computer's CPU on a single chip, which made computers more affordable, smaller, and faster.
Fourth-generation computers popularized operating systems that allowed multiple applications to run on a single computer at the same time. For storage they used floppy disks and hard drives, which expanded the amount of data that could be stored. The computer became an essential tool in many aspects of life, such as business, home, and school, and the innovations of this period continue to shape the computer industry today.
Fifth Generation
Fifth-generation computers have been developed from around 1980 to the present. They rely on parallel processing and ULSI (ultra large-scale integration) as their main technologies. Fifth-generation computers are more powerful, have huge storage capacity and higher speed, and are smaller and more portable. Computers of this generation are characterized by both their hardware and their software.
Fifth-generation computing is associated with advanced technologies such as quantum computation, artificial intelligence, and parallel processing.