Wednesday, July 11, 2012

Introduction To IP Addressing And Networking


Networks and networking have grown exponentially over the last 15 years; they have evolved at light speed just to keep up with huge increases in basic critical user needs, such as sharing data and printers, as well as more advanced demands such as video conferencing.

TYPES OF NETWORKS

LOCAL AREA NETWORK (LAN)

A Local Area Network (LAN) is a group of computers and network devices connected together, usually within the same building. It is a high-speed communication system designed to link computers and other data processing devices together within a small geographical area, such as a workgroup, department, or building. Local Area Networks implement shared-access technology, which means that all the devices attached to the LAN share a single communications medium, usually a coaxial, twisted-pair or fibre optic cable.

METROPOLITAN AREA NETWORK (MAN)

Metropolitan area networks or MANs are large computer networks usually spanning a city or a town. They typically use wireless infrastructure or optical fibre connections to link their sites.

The IEEE 802-2001 standard describes a MAN as being: "A MAN is optimized for a larger geographical area than is a LAN, ranging from several blocks of buildings to entire cities. MANs can also depend on communications channels of moderate to high data rates. A MAN might be owned and operated by a single organization, but it usually will be used by many individuals and organizations. MANs might also be owned and operated as public utilities. They will often provide means for internetworking of local networks. Metropolitan area networks can span up to 50km."

WIDE AREA NETWORK (WAN)

A Wide Area Network (WAN) is a computer network that covers a broad area. Compared with a MAN, a WAN is not restricted to a single geographical location, although it might be confined within the bounds of a state or country. A WAN connects several LANs, and may be limited to an enterprise (a corporation or organization) or accessible to the public.

WAN technology is high speed and relatively expensive. The Internet is an example of a worldwide public WAN.

NETWORKING DEVICES

ROUTERS

Routers are used to connect networks together and route packets of data from one network to another. By default, routers break up broadcast domains; a broadcast domain is the set of all devices on a network segment that hear all broadcasts sent on that segment.

Routers also break up collision domains. This is an Ethernet term used to describe a network scenario in which one particular device sends a packet on a network segment, forcing every other device on that segment to pay attention to it. If another device tries to transmit at the same time, a collision occurs, after which both devices must retransmit, one at a time.

Routers operate at layer 3 of the OSI (Open Systems Interconnection) reference model.

SWITCHES

Switches are used for network segmentation based on MAC addresses. A switch looks at the incoming frame's hardware address before deciding either to forward the frame or to drop it.

Switches break up collision domains but the hosts on the switch are still members of one big broadcast domain.

HUB

A hub is really a multi-port repeater. A repeater receives a digital signal, re-amplifies or regenerates it, and then forwards the signal out of all active ports without looking at any data. An active hub does the same thing. This means all devices plugged into a hub are in the same collision domain as well as in the same broadcast domain, so they share the same bandwidth. Hubs operate at the physical layer of the OSI model.

IP ADDRESSING

An IP address is a numeric identifier assigned to each machine on an IP network. It designates the specific location of a device on the network. An IP address is a software address designed to allow a host on one network to communicate with a host on a different network, regardless of the type of LAN each host participates in.

IP TERMINOLOGIES

Bit: A bit is one digit, either a 1 or a 0.

Byte: A byte is 7 or 8 bits, depending on whether parity is used.

Octet: An octet, made up of 8 bits, is just an ordinary 8-bit binary number. In most cases, byte and octet are completely interchangeable.

Network address: This is the designation used in routing to send packets to a remote network. For example 10.0.0.0, 172.16.0.0, and 192.168.10.0 are network addresses.

Broadcast address: The address used by applications and hosts to send information to all nodes on a network is called the broadcast address. Examples include 255.255.255.255, which means all networks, all nodes; and 172.16.255.255, which means all subnets and hosts on network 172.16.0.0.

HIERARCHICAL IP ADDRESSING SCHEME

An IP address consists of 32 bits of information (IPv4). IPv6, a newer version of IP, consists of 128 bits of information. The 32-bit IP address is divided into four sections, referred to as octets or bytes, each containing 8 bits (1 byte).

An IP address can be depicted using any of these three methods:

Dotted decimal, as in 172.16.30.56

Binary, as in 10101100.00010000.00011110.00111000

Hexadecimal, as in AC.10.1E.38

All these examples represent the same IP address, but the most commonly used form is dotted decimal. The Windows Registry stores a machine's IP address in hex.
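To make the three notations concrete, here is a minimal Python 3 sketch (standard library only, not part of any tool mentioned above) that derives the binary and hexadecimal forms from the dotted-decimal one:

# A minimal sketch (Python 3, standard library only) showing the three
# representations of the same IPv4 address.
import ipaddress

addr = ipaddress.IPv4Address("172.16.30.56")
octets = addr.packed                               # the four raw bytes

dotted  = str(addr)                                # '172.16.30.56'
binary  = ".".join(f"{b:08b}" for b in octets)     # '10101100.00010000.00011110.00111000'
hexform = ".".join(f"{b:02X}" for b in octets)     # 'AC.10.1E.38'

print(dotted, binary, hexform, sep="\n")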

The 32-bit IP address is a structured, or hierarchical, address, as opposed to a flat, non-hierarchical address. Although either type of addressing scheme could have been used, hierarchical addressing was chosen for a good reason. The advantage of this scheme is that it can handle a large number of addresses, namely about 4.3 billion (a 32-bit address space with two possible values for each position, either 1 or 0, gives 2^32, or 4,294,967,296, addresses).

The disadvantage of the flat addressing scheme relates to routing. If every address were unique, all routers on the internet would need to store the address of each and every machine on the internet. This would make efficient routing impossible.

NETWORK ADDRESS RANGE

The network address uniquely identifies each network. Every machine on the same network shares that network address as part of its IP address. In the IP address of 172.16.30.56, 172.16 is the network address.

The node address is assigned to and uniquely identifies each machine on a network. This number can also be referred to as the host address. In 172.16.30.56, 30.56 is the node address. A Class A network is used when a small number of networks, each possessing a very large number of nodes, is needed. A Class C network is used when numerous networks with a small number of nodes are needed.

CLASS A ADDRESSES

The first bit of the first byte in a class A network address must always be off or 0. This means a class A address must be between 0 and 127, inclusive.

0xxxxxxx.hhhhhhhh.hhhhhhhh.hhhhhhhh

If we turn the other 7 bits all off and then turn them all on, we'll find the class A range of network addresses.

00000000 = 0

01111111 = 127

Class A format is network.node.node.node, so for example in the IP address 49.22.102.70, the 49 is the network address and 22.102.70 is the node address. Every machine on this particular network would have the distinctive network address of 49.

CLASS B ADDRESSES

The first bit of the first byte must always be turned on, but the second bit must always be turned off.

10xxxxxx.xxxxxxxx.hhhhhhhh.hhhhhhhh

If we turn the first bit on and the second bit off, and then turn the remaining 6 bits all off and then all on, we'll find the Class B range of network addresses.

10000000 = 128

10111111 = 191

Class B format is network.network.node.node, so, for example, in the IP address 132.163.40.57, 132.163 is the network address and 40.57 is the node address.

CLASS C ADDRESSES

The first and second bits of the first byte must always be turned on, but the third bit can never be on.

110xxxxx.xxxxxxxx.xxxxxxxx.hhhhhhhh

If we turn the first and second bits on and the third bit off, and then turn the remaining 5 bits all off and then all on, we'll find the Class C range of network addresses.

11000000 = 192

11011111 = 223

Class C format is network.network.network.node, so, for example, in the IP address 195.166.231.75, 195.166.231 is the network address and 75 is the node address.

CLASS D AND CLASS E ADDRESSES

Addresses between 224 and 255 are reserved for Class D and Class E networks. Class D (224-239) is used for multicast addresses and Class E (240-255) is reserved for scientific and experimental purposes.
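To make the class boundaries concrete, here is a minimal Python sketch (a hypothetical helper, not part of any standard tool) that classifies an address by examining its first octet, exactly as described above:

# A minimal sketch (Python 3) classifying an IPv4 address by its first octet.
def address_class(ip):
    first_octet = int(ip.split(".")[0])
    if 0 <= first_octet <= 127:
        return "A"                 # 0xxxxxxx
    if 128 <= first_octet <= 191:
        return "B"                 # 10xxxxxx
    if 192 <= first_octet <= 223:
        return "C"                 # 110xxxxx
    if 224 <= first_octet <= 239:
        return "D (multicast)"
    return "E (experimental)"

print(address_class("49.22.102.70"))     # A
print(address_class("132.163.40.57"))    # B
print(address_class("195.166.231.75"))   # C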

PRIVATE IP ADDRESSES

Private IP addresses are those that can be used on a private network, but they're not routable through the internet. This design provides a measure of much-needed security, and it also conveniently saves valuable IP address space. If every host on every network had to have a real, routable IP address, we would have run out of IP addresses to hand out years ago.

Class A 10.0.0.0 through 10.255.255.255

Class B 172.16.0.0 through 172.31.255.255

Class C 192.168.0.0 through 192.168.255.255
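As an illustration, the following Python 3 sketch (standard library ipaddress module only) tests whether an address falls inside the private ranges listed above:

# A minimal sketch (Python 3 standard library) checking whether an address
# falls inside the private ranges listed above.
import ipaddress

private_blocks = [
    ipaddress.ip_network("10.0.0.0/8"),       # Class A private range
    ipaddress.ip_network("172.16.0.0/12"),    # Class B private range
    ipaddress.ip_network("192.168.0.0/16"),   # Class C private range
]

def is_private(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in block for block in private_blocks)

print(is_private("172.16.30.56"))   # True
print(is_private("8.8.8.8"))        # False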

TROUBLESHOOTING IP ADDRESSING

Here are the troubleshooting steps for resolving a problem on an IP network (a short script that automates these pings follows the list).

1. Open a DOS window and ping 127.0.0.1. This is the diagnostic or loopback address, and if you get a successful ping, your IP stack is considered to be initialized. If it fails, then you have an IP stack failure and need to reinstall TCP/IP on the host.

2. From the DOS window, ping the IP address of the local host. If that's successful, then your Network Interface Card (NIC) is functioning. If it fails, then there is a problem with the NIC. This doesn't mean that a cable is plugged into the NIC, only that the IP protocol stack on the host can communicate with the NIC.


3. From the DOS window, ping the default gateway. If the ping works, it means that the NIC is plugged into the network and can communicate on the local network. If it fails, then you have a local physical network problem that could be happening anywhere from the NIC to the gateway.
4. If steps 1 through 3 were successful, try to ping the remote server. If that works, then you have IP communication between the local host and the remote server; you also know that the remote physical network is working.
5. If the user still can't communicate with the server after steps 1 through 4 were successful, then there's probably a name resolution problem, and you need to check the Domain Name Server (DNS) settings.
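The following rough Python sketch automates the four ping tests. It assumes a Windows host (hence ping -n 1) and uses placeholder addresses that you would replace with your own local host, gateway and server addresses:

# A rough sketch automating steps 1-4 ("ping -n 1" sends one echo request
# on Windows; the non-loopback addresses below are placeholders).
import subprocess

def ping(target):
    result = subprocess.run(["ping", "-n", "1", target],
                            capture_output=True, text=True)
    return result.returncode == 0

steps = [
    ("1. Loopback (IP stack)",  "127.0.0.1"),
    ("2. Local host IP (NIC)",  "172.16.10.2"),   # placeholder local address
    ("3. Default gateway",      "172.16.10.1"),   # placeholder gateway
    ("4. Remote server",        "172.16.20.5"),   # placeholder remote host
]

for label, target in steps:
    print(label, "OK" if ping(target) else "FAILED")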
NETWORK ADDRESS TRANSLATION
Network Address Translation (NAT) is used mainly to translate private inside addresses on a network to a global outside address. The main idea is to conserve internet global address space, but it also increases network security by hiding internal IP addresses from external networks.
TABLE 3: NAT ADVANTAGES AND DISADVANTAGES
ADVANTAGES
Conserves legally registered addresses.
Reduces address overlap occurrence.
Increases flexibility when connecting to the internet.
Eliminates address renumbering as the network changes.
DISADVANTAGES
Translation introduces switching path delays.
Loss of end-to-end IP traceability.
Certain applications will not function with NAT enabled.
TYPES OF NAT
Static NAT: This type of NAT is designed to allow one-to-one mapping between local and global addresses. Static NAT requires that there is one real internet IP address for every host on your network.
Dynamic NAT: This version gives you the ability to map an unregistered IP address to a registered IP address from a pool of registered IP addresses.
Overloading: This is also known as Port Address Translation (PAT). It is the most popular type of NAT configuration. Overloading is a form of dynamic NAT that maps multiple unregistered IP addresses to a single registered IP address by using different ports. With overloading, thousands of users can connect to the internet using only one real global IP address, as sketched below.
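The following toy Python sketch illustrates only the idea behind overloading: many inside address/port pairs share one global address, each mapped to a unique translated port. The global address and the port numbers are made up for illustration:

# A simplified sketch of PAT/overloading: many inside (private) address/port
# pairs are mapped onto one global address, each with a unique translated port.
import itertools

GLOBAL_IP = "197.210.52.7"            # hypothetical single registered address
next_port = itertools.count(1024)     # next free port on the global address
nat_table = {}                        # (inside_ip, inside_port) -> (GLOBAL_IP, new_port)

def translate(inside_ip, inside_port):
    key = (inside_ip, inside_port)
    if key not in nat_table:
        nat_table[key] = (GLOBAL_IP, next(next_port))
    return nat_table[key]

print(translate("192.168.0.10", 51000))   # ('197.210.52.7', 1024)
print(translate("192.168.0.11", 51000))   # ('197.210.52.7', 1025)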
NAT TERMINOLOGIES 
Local addresses: Name of local hosts before translation.

Global addresses: Name of addresses after translation.
Inside local: Name of inside source address before translation.
Outside local: Name of destination host before translation.
Inside global: Name of inside hosts after translation
Outside global: Name of outside destination host after translation.
LAYER 2 SWITCHING
Layer 2 switching is the process of using the hardware addresses of devices on a LAN to segment a network. The term layer 2 switching is used because switches operate at the data link layer, which is the second layer of the OSI reference model.
Layer 2 switching is considered hardware-based bridging because it uses specialized hardware called application-specific integrated circuits (ASICs). ASICs can run at up to gigabit speeds with very low latency.
Switches read each frame as it passes through the network; the layer 2 device then puts the source hardware address in a filter table and keeps track of which port the frame was received on. This information (logged in the switch's filter table) is what helps the machine determine the location of a specific sending device. After a filter table is built on the layer 2 device, it will forward frames only to the segment where the destination hardware address is located. If the destination device is on the same segment as the frame, the layer 2 device will block the frame from going to any other segments. If the destination is on a different segment, the frame is transmitted only to that segment. This is called TRANSPARENT BRIDGING.
When a switch interface receives a frame with a destination hardware address that isn't found in the device filter table, it will forward the frame to all connected segments. If the unknown device that was sent the frame replies to this forwarding action, the switch updates its filter table regarding that device's location.
ADVANTAGES OF LAYER 2 SWITCHING
The biggest benefit of LAN switching over hub-centred implementations is that each device on every segment plugged into a switch can transmit simultaneously, whereas hubs allow only one device per network segment to communicate at a time.
Switches are faster than routers because they don't take time looking at the Network layer header information. Instead, they look at the frame's hardware address before deciding to either forward the frame or drop it.
Switches create private, dedicated collision domains and provide independent bandwidth on each port, unlike hubs. Consider five hosts connected to a switch, all running 10Mbps half-duplex to a server: unlike with a hub, each host has 10Mbps of dedicated communication to the server.
LIMITATIONS OF LAYER 2 SWITCHING
Switched networks break up collision domains, but the network is still one large broadcast domain. This not only limits your network's size and growth potential, but can also reduce its overall performance.
FUNCTIONS OF LAYER 2 SWITCHING
There are three distinct functions of layer 2 switching:
Address learning
Forward/filter decisions
Loop avoidance
ADDRESS LEARNING
When a switch is first powered on, the MAC forward/filter table is empty. When a device transmits and an interface receives the frame, the switch places the frame's source address in the MAC forward/filter table, allowing it to remember which interface the sending device is located on. The switch then has no choice but to flood the network with this frame out of every port except the source port, because it has no idea where the destination device is actually located.
If a device answers the flooded frame and sends a frame back, the switch will take the source address from that frame and place that MAC address in its database as well, associating this address with the interface that received the frame. Since the switch now has both of the relevant MAC addresses in its filtering table, the two devices can make a point-to-point connection. The switch doesn't need to flood the frame as it did the first time.
If there is no communication to a particular address within a certain amount of time, the switch will flush the entry from the database to keep it as current as possible.
FORWARD/FILTER DECISIONS
When a frame arrives at a switch interface, the destination hardware address is compared to the forward/filter MAC database. If the destination hardware address is known and listed in the database, the frame is sent out only the correct exit interface.
The switch doesn't transmit the frame out any interface except for the destination interface. This preserves bandwidth on the other network segments and is called FRAME FILTERING.
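The following toy Python sketch illustrates both behaviours, address learning and the forward/filter decision, with a simple MAC-address-to-port table. It is only a model of the logic, not of a real switch:

# A toy sketch of address learning and forward/filter decisions: the switch
# records which port each source MAC was seen on, forwards known destinations
# out a single port, and floods unknown destinations out all other ports.
class Layer2Switch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}                      # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port        # address learning
        if dst_mac in self.mac_table:            # forward/filter decision
            return [self.mac_table[dst_mac]]     # forward out one port only
        return [p for p in self.ports if p != in_port]   # flood

sw = Layer2Switch(ports=[1, 2, 3, 4])
print(sw.receive(1, "AA:AA:AA:AA:AA:AA", "BB:BB:BB:BB:BB:BB"))  # flood: [2, 3, 4]
print(sw.receive(2, "BB:BB:BB:BB:BB:BB", "AA:AA:AA:AA:AA:AA"))  # known: [1]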
LOOP AVOIDANCE
When two switches are connected together, redundant links between the switches are a good idea because they help prevent complete network failures in the event one link stops working.
Redundant links are extremely helpful, but they often cause more problems than they solve. This is because frames can be flooded down all redundant links simultaneously, creating network loops.
Switches use a protocol called STP (Spanning Tree Protocol), created by Digital Equipment Corporation (DEC), now Compaq, to avoid network loops by shutting down redundant links. With STP running, frames will be forwarded only on the premium, STP-picked link.
CONFIGURING THE CISCO 2950 CATALYST SWITCH FAMILY
The 2950 switch is one of the Cisco Catalyst switch family's high-end models. The 2950 comes in many flavours and runs from 10Mbps all the way up to 1Gbps switched ports with either twisted-pair or fibre. It can provide basic data, video and voice services.
2950 SWITCH STARTUP
When the 2950 switch is first powered on, it runs through a power-on self-test (POST). At first all port LEDs are green, and if upon completion the POST determines that all ports are in good shape, all the LEDs blink and then turn off. But if the POST finds a port that has failed, both the system LED and that port's LED turn amber.
However, unlike a router, the switch is actually usable in its fresh-out-of-the-box condition. You can just plug the switch into your network and connect network segments together without any configuration.
To connect to the Cisco switch, use a rolled Ethernet cable to connect a host to the switch's console serial communication port. Once you have the correct cable connected from your PC to the Cisco switch, you can start HyperTerminal to create a console connection and configure the device as follows:
1. Open HyperTerminal by clicking the Start button, then All Programs, then Accessories, then Communications, and then HyperTerminal. Enter a name for the connection; it is irrelevant what you name it. Then click OK.
2. Choose the communication port, either COM1 or COM2, whichever is open on your PC.
3. Now set the port settings. The default values (2400bps and hardware flow control) will not work; you must change the port settings as described below.
Notice that the bit rate is set to 9600 and the flow control is set to None. At this point click OK and press the Enter key, and you should be connected to your Cisco switch console port.
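If you prefer a script to HyperTerminal, the same console settings can be expressed with the third-party pyserial library, as in this sketch (pyserial is assumed to be installed, and COM1 is only a guess at the free port):

# A sketch of a console connection using pyserial with the settings above:
# 9600bps, 8 data bits, no parity, 1 stop bit, flow control none.
import serial

console = serial.Serial(
    port="COM1",                     # whichever serial port is free on the PC
    baudrate=9600,                   # bit rate 9600
    bytesize=serial.EIGHTBITS,       # 8 data bits
    parity=serial.PARITY_NONE,       # no parity
    stopbits=serial.STOPBITS_ONE,    # 1 stop bit
    xonxoff=False, rtscts=False,     # flow control: none
    timeout=1,
)
console.write(b"\r\n")               # wake the console; expect the Switch> prompt
print(console.read(200).decode(errors="ignore"))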
Here's the 2950 switch's initial output:
--- System Configuration Dialog ---
Would you like to enter the initial configuration dialog? [Yes/no]: no
Press RETURN to get started!
00:04:53: %LINK-5-CHANGED: Interface Vlan1, changed state to administratively down
00:04:54: %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan1, changed state to down 
Switch>

THE CONFIGURATION
The Switch> prompt is called user exec mode and it's mostly used to view statistics. You can only view and change the configuration of a Cisco switch in privileged exec mode, which you enter with the enable command.
Switch>
Switch> enable
Switch#
Switch# disable
Switch>
The global configuration mode can be entered from the privileged mode by using the configure terminal command or config t for short. 
Switch# config t 
Enter configuration commands, one per line. End with CNTL/Z.
Switch(config)# hostname zenith 
Zenith(config)#

The hostname command is used in naming the switch. The hostname of a switch is only locally significant but it's still helpful to set a hostname on a switch so that you can identify the switch when connecting to it.
SETTING THE ENABLE MODE PASSWORDS AND LINE PASSWORD.
Zenith> enable
Zenith# config t
Enter configuration commands, one per line. End with CNTL/Z.
Zenith(config)# enable password bank
Zenith(config)# enable secret middle
The enable password bank command sets the enable password to bank, and the enable secret middle command sets the enable secret password to middle. The enable secret password is more secure and supersedes the enable password if it is set. The enable secret and enable password cannot be the same on the 2950 switch.
Zenith(config)# line ?
First line number
console Primary terminal line
vty Virtual terminal
Zenith(config)# line vty ?
First line number
Zenith(config)# line vty 0 15
Zenith(config-line)# login
Zenith(config-line)# password alex
Zenith(config-line)# line con 0
Zenith(config-line)# login
Zenith(config-line)# password malouda
Zenith(config-line)# exit
Zenith(config)# exit
Zenith#
The line vty 0 15, login, and password alex commands set the Telnet password to alex, and the line con 0, login, and password malouda commands set the console password to malouda.
SETTING IP INFORMATION
You don't have to set any IP configuration on the switch to make it work; you can just plug it in. But there are two reasons to set IP address information on the switch:
To manage the switch via Telnet or other management software.
To configure the switch with different VLANs and other network functions.
Zenith(config)# int vlan 1
Zenith(config-if)# ip address 172.16.10.17 255.255.255.0
Zenith(config-if)# no shutdown
Zenith(config-if)# exit
Zenith(config)# ip default-gateway 172.16.10.1
Zenith(config)#
The IP address is set to 172.16.10.17 with a mask of 255.255.255.0, and the no shutdown command must be applied to enable the interface.
CONFIGURING INTERFACE DESCRIPTIONS
You can administratively set a name for each interface on the switches with the description command.
Zenith(config)# int fastethernet 0/ ?
FastEthernet Interface number.
Zenith(config)# int fastethernet 0/1
Zenith(config-if)# description Sales LAN
Zenith(config-if)# int f0/12
Zenith(config-if)# description Connection to Mail server
Zenith(config-if)# CNTL/Z 
Zenith#

You can look at the descriptions at any time with either the show interface command or the show running-config command from privileged exec mode.
ERASING AND SAVING THE SWITCH CONFIGURATION 
Zenith# copy running-config startup-config 
Zenith# erase startup-config

The first command copies the configuration into NVRAM (non-volatile RAM), while the erase startup-config command erases the switch configuration.
Zenith# erase startup-config
Erasing the nvram filesystem will remove all files! Continue? [confirm] [Enter]
[OK]
Erase of nvram: complete
Zenith#
VIRTUAL LAN (VLAN)
A Virtual LAN (VLAN) is a logical grouping of network users and resources connected to administratively defined ports on a switch. When you create VLANs, you create smaller broadcast domains within a switched internetwork by assigning different ports on the switch to different subnetworks. A VLAN is treated like its own subnet or broadcast domain, which means that frames broadcast onto the network are only switched between ports logically grouped within the same VLAN.
By default, no hosts in a specific VLAN can communicate with any other hosts that are members of another VLAN. 
ADVANTAGES OF VLANs

A group of users needing security can be put into a VLAN so that no user outside the VLAN can communicate with them.
As a logical grouping of users by function, VLANs can be considered independent from their physical or geographical locations.
VLANs can enhance network security.
VLANs can block broadcast storms caused by a faulty NIC (Network Interface Card).
VLANs increase the number of broadcast domains while decreasing their sizes.
VLAN MEMBERSHIP
VLANs are usually created by the administrator, who then assigns switch ports to each VLAN. Such a VLAN is called a static VLAN. If the administrator wants to do a little more work up front and assign all the host devices' hardware addresses into a database, then the switch can be configured to assign VLANs dynamically whenever a host is plugged into it. This is called a dynamic VLAN.
STATIC VLANs
Static VLANs are the usual way of creating VLANs, and they're also the most secure. The switch port that you assign a VLAN association to maintains that association until an administrator manually changes the port assignment.
DYNAMIC VLANs
A dynamic VLAN determines a node's VLAN assignment automatically. Using intelligent management software, you can base assignment on hardware addresses, protocols, or even applications to create dynamic VLANs.
An example is the VLAN Management Policy Server (VMPS) service used to set up a database of MAC addresses that can be used for dynamic addressing of VLANs. A VMPS database maps MAC addresses to VLANs.
FRAME TAGGING
As frames are switched through the network, switches must be able to keep track of all of them. Frames are handled differently according to the type of link they are traversing. The frame identification method uniquely assigns a user-defined ID to each frame. This is sometimes referred to as the "VLAN ID".
Each switch that the frame reaches must first identify the VLAN ID from the frame tag, and then it finds out what to do with the frame by looking at the information in the filter table. If the frame reaches a switch that has another trunked link, the frame will be forwarded out the trunk-link port.
Once the frame reaches an exit to an access link matching the frame's VLAN ID, the switch removes the VLAN identifier. This is so the destination device can receive the frame without having to understand its VLAN identification.
There are two different types of links in a switched environment:
Access links: This type of link is only part of one VLAN. Any device attached to an access link is unaware of VLAN membership; the device just assumes it's part of a broadcast domain. Access-link devices cannot communicate with devices outside their VLAN unless the packet is routed.
Trunk links: Trunk links can carry multiple VLANs. A trunk link is a 100Mbps or 1000Mbps point-to-point link between two switches, or between a switch and a server. Trunk links carry the traffic of multiple VLANs, from 1 to 1005, at a time. Trunking allows you to make a single port part of multiple VLANs at the same time. It also allows VLANs to span multiple switches.

VLAN IDENTIFICATION METHODS
There are basically two ways of frame tagging:
Inter-Switch Link (ISL)
IEEE 802.1Q
The main purpose of ISL and 802.1Q frame tagging methods is to provide interswitch VLAN communication.
Inter-Switch Link (ISL) Protocol: This is proprietary to Cisco switches, and it is used for Fast Ethernet and Gigabit Ethernet links only. ISL routing can be used on switch ports, router interfaces and server interface cards to trunk a server.
IEEE 802.1Q: Created by the IEEE as a standard method of frame tagging, it isn't Cisco proprietary, so if you're trunking between a Cisco switched link and a different brand of switch, you have to use 802.1Q for the trunk link to work.
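As an illustration of what the 802.1Q tag looks like on the wire, here is a short Python sketch that builds the 4-byte tag (the TPID 0x8100 followed by priority, DEI and the 12-bit VLAN ID); it is only a demonstration of the field layout, not a full frame builder:

# A rough sketch of an 802.1Q tag: TPID 0x8100 plus a tag control field
# holding priority (3 bits), DEI (1 bit) and the VLAN ID (12 bits).
import struct

def dot1q_tag(vlan_id, priority=0, dei=0):
    tpid = 0x8100                                    # 802.1Q tag protocol identifier
    tci = (priority << 13) | (dei << 12) | (vlan_id & 0x0FFF)
    return struct.pack("!HH", tpid, tci)             # 4 bytes, network byte order

print(dot1q_tag(vlan_id=10).hex())   # '8100000a'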
VLAN TRUNKING PROTOCOL (VTP)
This protocol was created by Cisco and is Cisco proprietary. The basic goals of VLAN Trunking Protocol (VTP) are to manage all configured VLANs across a switched internetwork and to maintain consistency throughout the network. VTP allows an administrator to add, delete and rename VLANs on one switch; that information is then propagated to all other switches in the VTP domain.
Before you can get VTP to manage VLANs across the network, you have to create a VTP server. All switches sharing the same VLAN information must be in the same VTP domain.
You can use a VTP domain if there is more than one switch connected in a network, but if all the switches are in only one VLAN, there is no need to use VTP. VTP information is sent between switches via trunk ports.
This report exposes the reader to various aspects of computer networking, IP routing and IP switching, and how to manage a network, from an office network to larger networks. Areas covered in this report include IP addressing, Network Address Translation (NAT), IP switching and Virtual LANs (VLANs).

Job opportunities with reference to networking


Networking is the act of linking computers together, often to a central server, for the purpose of information sharing and effective use of resources such as printers.

As we explained earlier in one of our articles, entitled "HOW TO BECOME A COMPUTER NETWORKER", certain skills are required of an individual aspiring to take up a networking career, such as having a full knowledge of what the computer is and how it works. As you develop your interest in becoming a networker, there are various career opportunities in the field for you to explore.

To take up a career in computer networking, one must be conversant with computer hardware, operating systems, microprocessors, peripheral devices, computer architecture, assembly and disassembly, installing various software, configuring PCs, preventive maintenance and troubleshooting.

In computer networking, several types of positions exist, some of which include:

Network Administrator
Network (Systems) Engineer
Network (Service) Technician
Network Programmer/Analyst
Network/Information Systems Manager
Network Security Officer

Below is a brief description of the various career opportunities in computer networking; it will serve as a guide to help you identify the area you can fit into.

The Network Administrator: The network administrator is responsible for configuring and managing LANs and WANs. He is also in charge of analyzing, installing and configuring company networks. He monitors daily network performance, troubleshoots systems and maintains network security. Other secondary activities of the network administrator include assisting customers with operating systems and network adapters, configuring routers, switches and firewalls, and evaluating third-party tools.


Network Technician:                                                                                                       

His major responsibilities focus more on the setup, troubleshooting, and repair of specific hardware and software products.


Network Programmer/Analyst:
The network programmer/analyst generally develops software programs or scripts that aid in network analysis, such as diagnostic or monitoring utilities. This role also involves evaluating third-party products and integrating new software technologies into an existing network environment, or building a new environment.

Network/Information Systems Manager:                                                  
He generally supervises the work of the administrators, engineers, technicians, and/or programmers. He also focuses on longer-range planning and strategy considerations.

Service Technicians
He is in charge of visiting customer sites to perform field upgrades and support functions.

Network Security Officer
As more and more organizations move their offline transactions online and vast quantities of vital and sensitive data travel through networks, there is a need to develop concrete e-security systems to safeguard the networks and databases of those organizations, and the network security officer is in charge of that job. To succeed in this career, you must have a good knowledge of system programming, administration, security configuration, firewalls, advanced TCP/IP, security fundamentals, security implementation, router security and attack routes. Computer security specialists plan, coordinate and maintain an organization's information security. These workers educate users about computer security, install security software, monitor networks for security breaches, respond to cyber attacks and, in some cases, gather data and evidence to be used in prosecuting cyber crime. The responsibilities of computer security specialists have increased in recent years as cyber attacks have become more sophisticated.

Network architects or network engineers: are the designers of computer networks. They set up, test, and evaluate systems such as local area networks (LANs), wide area networks (WANs), the Internet, intranets, and other data communications systems. Systems are configured in many ways and can range from a connection between two offices in the same building to globally distributed networks, voice mail, and e-mail systems of a multinational organization. Network architects and engineers perform network modeling, analysis, and planning, which often require both hardware and software solutions. For example, setting up a network may involve the installation of several pieces of hardware, such as routers and hubs, wireless adaptors, and cables, as well as the installation and configuration of software, such as network drivers. These workers may also research related products and make necessary hardware and software recommendations, as well as address information security issues.


Artificial neural network


An artificial neural network (ANN), usually called "neural network" (NN), is a mathematical model or computational model that tries to simulate the structure and/or functional aspects of biological neural networks. It consists of an interconnected group of artificial neurons and processes information using a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network during the learning phase. Neural networks are non-linear statistical data modeling tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data.

Background

There is no precise agreed-upon definition among researchers as to what a neural network is, but most would agree that it involves a network of simple processing elements (neurons), which can exhibit complex global behavior, determined by the connections between the processing elements and element parameters. The original inspiration for the technique came from examination of the central nervous system and the neurons (and their axons, dendrites and synapses) which constitute one of its most significant information processing elements (see Neuroscience). In a neural network model, simple nodes (called variously "neurons", "neurodes", "PEs" ("processing elements") or "units") are connected together to form a network of nodes — hence the term "neural network." While a neural network does not have to be adaptive per se, its practical use comes with algorithms designed to alter the strength (weights) of the connections in the network to produce a desired signal flow.

These networks are also similar to the biological neural networks in the sense that functions are performed collectively and in parallel by the units, rather than there being a clear delineation of subtasks to which various units are assigned (see also connectionism). Currently, the term Artificial Neural Network (ANN) tends to refer mostly to neural network models employed in statistics, cognitive psychology and artificial intelligence. Neural network models designed with emulation of the central nervous system (CNS) in mind are a subject of theoretical neuroscience (computational neuroscience).

In modern software implementations of artificial neural networks the approach inspired by biology has for the most part been abandoned for a more practical approach based on statistics and signal processing. In some of these systems, neural networks or parts of neural networks (such as artificial neurons) are used as components in larger systems that combine both adaptive and non-adaptive elements. While the more general approach of such adaptive systems is more suitable for real-world problem solving, it has far less to do with the traditional artificial intelligence connectionist models. What they do have in common, however, is the principle of non-linear, distributed, parallel and local processing and adaptation.

Models

Neural network models in artificial intelligence are usually referred to as artificial neural networks (ANNs); these are essentially simple mathematical models defining a function f : X → Y. Each type of ANN model corresponds to a class of such functions.

Employing artificial neural networks

Perhaps the greatest advantage of ANNs is their ability to be used as an arbitrary function approximation mechanism which 'learns' from observed data. However, using them is not so straightforward and a relatively good understanding of the underlying theory is essential.

Choice of model: This will depend on the data representation and the application. Overly complex models tend to lead to problems with learning.
Learning algorithm: There are numerous tradeoffs between learning algorithms. Almost any algorithm will work well with the correct hyperparameters for training on a particular fixed dataset. However selecting and tuning an algorithm for training on unseen data requires a significant amount of experimentation.
Robustness: If the model, cost function and learning algorithm are selected appropriately the resulting ANN can be extremely robust.
With the correct implementation ANNs can be used naturally in online learning and large dataset applications. Their simple implementation and the existence of mostly local dependencies exhibited in the structure allows for fast, parallel implementations in hardware.

Applications

The utility of artificial neural network models lies in the fact that they can be used to infer a function from observations. This is particularly useful in applications where the complexity of the data or task makes the design of such a function by hand impractical.

Real life applications

The tasks to which artificial neural networks are applied tend to fall within the following broad categories:

Function approximation, or regression analysis, including time series prediction, fitness approximation and modeling.
Classification, including pattern and sequence recognition, novelty detection and sequential decision making.
Data processing, including filtering, clustering, blind source separation and compression.
Robotics, including directing manipulators, Computer numerical control.
Application areas include system identification and control (vehicle control, process control), quantum chemistry, game-playing and decision making (backgammon, chess, racing), pattern recognition (radar systems, face identification, object recognition and more), sequence recognition (gesture, speech, handwritten text recognition), medical diagnosis, financial applications (automated trading systems), data mining (or knowledge discovery in databases, "KDD"), visualization and e-mail spam filtering.

Neural network software

Neural network software is used to simulate, research, develop and apply artificial neural networks, biological neural networks and in some cases a wider array of adaptive systems. See also logistic regression.

Types of neural networks

Feedforward neural network

The feedforward neural network was the first and arguably simplest type of artificial neural network devised. In this network, the information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network.
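A minimal Python sketch of a feedforward pass (numpy assumed available, with made-up random weights) looks like this:

# A minimal sketch of a feedforward pass: input -> hidden layer -> output,
# with no cycles. The weights here are random placeholders, not trained values.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)   # input (4 units) -> hidden (3 units)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # hidden (3 units) -> output (1 unit)

def forward(x):
    h = np.tanh(x @ W1 + b1)        # hidden layer activation
    return h @ W2 + b2              # output layer (linear)

print(forward(np.array([0.5, -1.0, 2.0, 0.1])))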

Radial basis function (RBF) network

Radial basis functions are powerful techniques for interpolation in multidimensional space. An RBF is a function which has a distance criterion with respect to a centre built into it. Radial basis functions have been applied in the area of neural networks, where they may be used as a replacement for the sigmoidal hidden layer transfer characteristic in multi-layer perceptrons. RBF networks have two layers of processing: in the first, input is mapped onto each RBF in the 'hidden' layer. The RBF chosen is usually a Gaussian. In regression problems the output layer is then a linear combination of hidden layer values representing the mean predicted output. The interpretation of this output layer value is the same as a regression model in statistics. In classification problems the output layer is typically a sigmoid function of a linear combination of hidden layer values, representing a posterior probability. Performance in both cases is often improved by shrinkage techniques, known as ridge regression in classical statistics and known to correspond to a prior belief in small parameter values (and therefore smooth output functions) in a Bayesian framework.

RBF networks have the advantage of not suffering from local minima in the same way as Multi-Layer Perceptrons. This is because the only parameters that are adjusted in the learning process are the linear mapping from hidden layer to output layer. Linearity ensures that the error surface is quadratic and therefore has a single easily found minimum. In regression problems this can be found in one matrix operation. In classification problems the fixed non-linearity introduced by the sigmoid output function is most efficiently dealt with using iteratively re-weighted least squares.




RBF networks have the disadvantage of requiring good coverage of the input space by radial basis functions. RBF centres are determined with reference to the distribution of the input data, but without reference to the prediction task. As a result, representational resources may be wasted on areas of the input space that are irrelevant to the learning task. A common solution is to associate each data point with its own centre, although this can make the linear system to be solved in the final layer rather large, and requires shrinkage techniques to avoid overfitting.
Associating each input datum with an RBF leads naturally to kernel methods such as Support Vector Machines and Gaussian Processes (the RBF is the kernel function). All three approaches use a non-linear kernel function to project the input data into a space where the learning problem can be solved using a linear model. Like Gaussian Processes, and unlike SVMs, RBF networks are typically trained in a Maximum Likelihood framework by maximizing the probability (minimizing the error) of the data under the model. SVMs take a different approach to avoiding overfitting by maximizing instead a margin. RBF networks are outperformed in most classification applications by SVMs. In regression applications they can be competitive when the dimensionality of the input space is relatively small.
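A small Python sketch (numpy assumed available) of an RBF regression network on toy data, with Gaussian hidden units and the linear output weights found in a single least-squares step as described above, is shown below:

# A small sketch of an RBF regression network: Gaussian hidden units centred
# on a subset of the data, and a linear output layer fitted in one step.
import numpy as np

def rbf_design(X, centres, width=1.0):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))        # Gaussian RBF activations

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0])                              # toy regression target

centres = X[::5]                                 # every 5th data point as a centre
H = rbf_design(X, centres)
w, *_ = np.linalg.lstsq(H, y, rcond=None)        # linear output weights in one matrix operation

print(np.abs(H @ w - y).max())                   # training error of the fit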

Kohonen self-organizing network

The self-organizing map (SOM) invented by Teuvo Kohonen performs a form of unsupervised learning. A set of artificial neurons learn to map points in an input space to coordinates in an output space. The input space can have different dimensions and topology from the output space, and the SOM will attempt to preserve these.

Recurrent network

Contrary to feedforward networks, recurrent neural networks (RNs) are models with bi-directional data flow. While a feedforward network propagates data linearly from input to output, RNs also propagate data from later processing stages to earlier stages.
Simple recurrent network
A simple recurrent network (SRN) is a variation on the Multi-Layer Perceptron, sometimes called an "Elman network" due to its invention by Jeff Elman. A three-layer network is used, with the addition of a set of "context units" in the input layer. There are connections from the middle (hidden) layer to these context units fixed with a weight of one. At each time step, the input is propagated in a standard feed-forward fashion, and then a learning rule (usually back-propagation) is applied. The fixed back connections result in the context units always maintaining a copy of the previous values of the hidden units (since they propagate over the connections before the learning rule is applied). Thus the network can maintain a sort of state, allowing it to perform such tasks as sequence-prediction that are beyond the power of a standard Multi-Layer Perceptron.
In a fully recurrent network, every neuron receives inputs from every other neuron in the network. These networks are not arranged in layers. Usually only a subset of the neurons receive external inputs in addition to the inputs from all the other neurons, and another disjunct subset of neurons report their output externally as well as sending it to all the neurons. These distinctive inputs and outputs perform the function of the input and output layers of a feed-forward or simple recurrent network, and also join all the other neurons in the recurrent processing.
Hopfield network
The Hopfield network is a recurrent neural network in which all connections are symmetric. Invented by John Hopfield in 1982, this network guarantees that its dynamics will converge. If the connections are trained using Hebbian learning then the Hopfield network can perform as robust content-addressable (or associative) memory, resistant to connection alteration.
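A toy Python sketch (numpy assumed) of Hebbian storage and recall of a single pattern illustrates the idea:

# A toy sketch of a Hopfield network: symmetric Hebbian weights storing one
# bipolar pattern, then asynchronous updates recalling it from a corrupted copy.
import numpy as np

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)                       # symmetric weights, no self-connections

state = pattern.copy()
state[0] = -state[0]                         # corrupt one unit
for _ in range(5):                           # a few full update sweeps
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(np.array_equal(state, pattern))        # True: the stored pattern is recalled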
Echo state network
The echo state network (ESN) is a recurrent neural network with a sparsely connected random hidden layer. The weights of the output neurons are the only part of the network that can change and be learned. ESNs are good at (re)producing temporal patterns.
Long short-term memory network
The long short-term memory (LSTM) network is an artificial neural network structure that, unlike traditional RNNs, doesn't suffer from the vanishing gradient problem. It can therefore use long delays and can handle signals that have a mix of low- and high-frequency components.

Stochastic neural networks

A stochastic neural network differs from a typical neural network because it introduces random variations into the network. In a probabilistic view of neural networks, such random variations can be viewed as a form of statistical sampling, such as Monte Carlo sampling.
Boltzmann machine
The Boltzmann machine can be thought of as a noisy Hopfield network. Invented by Geoff Hinton and Terry Sejnowski in 1985, the Boltzmann machine is important because it is one of the first neural networks to demonstrate learning of latent variables (hidden units). Boltzmann machine learning was at first slow to simulate, but the contrastive divergence algorithm of Geoff Hinton (circa 2000) allows models such as Boltzmann machines and products of experts to be trained much faster.

Modular neural networks

Biological studies have shown that the human brain functions not as a single massive network, but as a collection of small networks. This realization gave birth to the concept of modular neural networks, in which several small networks cooperate or compete to solve problems.
Committee of machines
A committee of machines (CoM) is a collection of different neural networks that together "vote" on a given example. This generally gives a much better result compared to other neural network models. Because neural networks suffer from local minima, starting with the same architecture and training but using different initial random weights often gives vastly different networks. A CoM tends to stabilize the result.
The CoM is similar to the general machine learning bagging method, except that the necessary variety of machines in the committee is obtained by training from different random starting weights rather than training on different randomly selected subsets of the training data.
Associative neural network (ASNN)
The ASNN is an extension of the committee of machines that goes beyond a simple/weighted average of different models. ASNN represents a combination of an ensemble of feed-forward neural networks and the k-nearest neighbor technique (kNN). It uses the correlation between ensemble responses as a measure of distance amid the analyzed cases for the kNN. This corrects the bias of the neural network ensemble. An associative neural network has a memory that can coincide with the training set. If new data become available, the network instantly improves its predictive ability and provides data approximation (self-learn the data) without a need to retrain the ensemble. Another important feature of ASNN is the possibility to interpret neural network results by analysis of correlations between data cases in the space of models. The method is demonstrated at www.vcclab.org, where you can either use it online or download it.

Other types of networks

These special networks do not fit in any of the previous categories.
Holographic associative memory
Holographic associative memory represents a family of analog, correlation-based, associative, stimulus-response memories, where information is mapped onto the phase orientation of complex numbers.
Instantaneously trained networks
Instantaneously trained neural networks (ITNNs) were inspired by the phenomenon of short-term learning that seems to occur instantaneously. In these networks the weights of the hidden and the output layers are mapped directly from the training vector data. Ordinarily, they work on binary data, but versions for continuous data that require small additional processing are also available.
Spiking neural networks
Spiking neural networks (SNNs) are models which explicitly take into account the timing of inputs. The network input and output are usually represented as series of spikes (delta function or more complex shapes). SNNs have an advantage of being able to process information in the time domain (signals that vary over time). They are often implemented as recurrent networks. SNNs are also a form of pulse computer.
Spiking neural networks with axonal conduction delays exhibit polychronization, and hence could have a very large memory capacity.
Networks of spiking neurons, and the temporal correlations of neural assemblies in such networks, have been used to model figure/ground separation and region linking in the visual system (see, for example, Reitboeck et al. in Haken and Stadler: Synergetics of the Brain, Berlin, 1989).
In June 2005 IBM announced construction of a Blue Gene supercomputer dedicated to the simulation of a large recurrent spiking neural network.
Gerstner and Kistler have a freely available online textbook on Spiking Neuron Models.
Dynamic neural networks
Dynamic neural networks not only deal with nonlinear multivariate behaviour, but also include (learning of) time-dependent behaviour such as various transient phenomena and delay effects.
Cascading neural networks
Cascade-Correlation is an architecture and supervised learning algorithm developed by Scott Fahlman and Christian Lebiere. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network. See: Cascade correlation algorithm.
Neuro-fuzzy networks
A neuro-fuzzy network is a fuzzy inference system in the body of an artificial neural network. Depending on the FIS type, there are several layers that simulate the processes involved in a fuzzy inference like fuzzification, inference, aggregation and defuzzification. Embedding an FIS in a general structure of an ANN has the benefit of using available ANN training methods to find the parameters of a fuzzy system.
Compositional pattern-producing networks
Compositional pattern-producing networks (CPPNs) are a variation of ANNs which differ in their set of activation functions and how they are applied. While typical ANNs often contain only sigmoid functions (and sometimes Gaussian functions), CPPNs can include both types of functions and many others. Furthermore, unlike typical ANNs, CPPNs are applied across the entire space of possible inputs so that they can represent a complete image. Since they are compositions of functions, CPPNs in effect encode images at infinite resolution and can be sampled for a particular display at whatever resolution is optimal.
One-shot associative memory
This type of network can add new patterns without the need for re-training. This is done by creating a specific memory structure, which assigns each new pattern to an orthogonal plane using adjacently connected hierarchical arrays. The network offers real-time pattern recognition and high scalability; however, it requires parallel processing and is thus best suited for platforms such as wireless sensor networks (WSN), grid computing, and GPGPUs.

Theoretical properties

Computational power

The multi-layer perceptron (MLP) is a universal function approximator, as proven by the Cybenko theorem. However, the proof is not constructive regarding the number of neurons required or the settings of the weights.
Work by Hava Siegelmann and Eduardo D. Sontag has provided a proof that a specific recurrent architecture with rational valued weights (as opposed to the commonly used floating point approximations) has the full power of a Universal Turing Machine using a finite number of neurons and standard linear connections. They have further shown that the use of irrational values for weights results in a machine with super-Turing power.

Capacity

Artificial neural network models have a property called 'capacity', which roughly corresponds to their ability to model any given function. It is related to the amount of information that can be stored in the network and to the notion of complexity.

Convergence

Nothing can be said in general about convergence since it depends on a number of factors. Firstly, there may exist many local minima. This depends on the cost function and the model. Secondly, the optimization method used might not be guaranteed to converge when far away from a local minimum. Thirdly, for a very large amount of data or parameters, some methods become impractical. In general, it has been found that theoretical guarantees regarding convergence are an unreliable guide to practical application.

Generalisation and statistics

In applications where the goal is to create a system that generalises well to unseen examples, the problem of overtraining has emerged. This arises in overcomplex or overspecified systems when the capacity of the network significantly exceeds the needed free parameters. There are two schools of thought for avoiding this problem: the first is to use cross-validation and similar techniques to check for the presence of overtraining and to select hyperparameters that minimize the generalisation error. The second is to use some form of regularisation. This is a concept that emerges naturally in a probabilistic (Bayesian) framework, where regularisation can be performed by selecting a larger prior probability over simpler models; but also in statistical learning theory, where the goal is to minimize two quantities: the 'empirical risk' and the 'structural risk', which roughly correspond to the error over the training set and the predicted error in unseen data due to overfitting.
Confidence analysis of a neural network
Supervised neural networks that use an MSE cost function can use formal statistical methods to determine the confidence of the trained model. The MSE on a validation set can be used as an estimate for variance. This value can then be used to calculate the confidence interval of the output of the network, assuming a normal distribution. A confidence analysis made this way is statistically valid as long as the output probability distribution stays the same and the network is not modified.
By assigning a softmax activation function on the output layer of the neural network (or a softmax component in a component-based neural network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is very useful in classification as it gives a certainty measure on classifications.
The softmax activation function: y_i = \frac{e^{x_i}}{\sum_{j=1}^{c} e^{x_j}}
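A small Python sketch (numpy assumed) of this function, using the usual max-subtraction for numerical stability, is:

# A small sketch of the softmax activation: exponentiate and normalise so the
# outputs can be read as posterior probabilities that sum to 1.
import numpy as np

def softmax(x):
    z = np.exp(x - np.max(x))        # shift by the max for numerical stability
    return z / z.sum()

outputs = softmax(np.array([2.0, 1.0, 0.1]))
print(outputs, outputs.sum())        # probabilities summing to 1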