Thursday, October 31, 2013

Security Specialists Careers

Network security specialists also respond to compromises of sensitive information caused by cyber attacks, including viruses, worms, and other destructive software that gets through firewalls. Anti-virus software continues to grow more sophisticated as computer predators become more creative and more destructive.

A network security specialist often carries out other kinds of security measures as well. New laws related to the U.S. Department of Homeland Security require U.S. businesses to monitor the electronic communications of their employees, including emails, instant messaging and phone calls. Monitoring software exists that flags words such as "bomb," "kill" and "drugs" to allow individuals in computer careers such as this to investigate the context of employees' communications that include such words. You may be required to collect data related to security incidents and investigations.

The job functions of a network security specialist are often broad to fulfill the unique needs of a particular office. Although specialization improves your marketability, you will do well to take additional coursework and gain a broad range of computer skills so that you are more adaptable to industry changes. Crossover also exists in many computer-related jobs. The additional training is helpful, especially if you perform additional functions outside the industry standard. It is also helpful in your interaction with your co-workers in other computer careers. Your knowledge of business functions is also helpful in this profession.

Wednesday, October 30, 2013

Network Security Specialist

A network security specialist, or a computer security specialist, is a kind of computer administrator who specializes in protecting a company's data and other information. Network security specialists build firewalls, install anti-virus software on servers and computers within a network, and monitor networks for breaches in security. Individuals in computer careers such as this who specialize in one kind of systems maintenance usually work for large companies or organizations with particularly sensitive data, such as investment firms, insurance companies and government agencies.

The network security field is quickly growing right now as more businesses and individuals are storing sensitive data electronically. In addition to the technical roles of this position, a network security specialist often provides training to general staff regarding security issues, and he may develop related company policies such as security matters related to accessing company information using a smart phone, a mobile laptop, or home computer and transmitting information using a thumb drive or an external online data storage service. Often a network security specialist will help facilitate authorized mobile access of company information so that proper security measures can be in place.

Breaches of confidential information stored by a business or organization can be disastrous. If the breach involves unauthorized access to personal information, it can lead to legal problems in terms of liabilities. And if the blueprints of a new product a business plans to launch fall into the wrong hands, it can jeopardize a company's competitive advantage. As a result, network security is drawing more and more attention from executive management.

Laws govern industries regarding the way they store and access personal information of clients, patients or customers. Personal information includes Social Security numbers, personal financial information, medical information and personnel information. It may be as simple as names, addresses and phone numbers, or it may be psychographic market research data that customers are not even aware exists. Individuals in computer careers such as this are often required to pass extensive background checks. Their position also requires good judgment and the utmost discretion.

Tuesday, October 29, 2013

NETWORK PROTOCOLS

To avoid chaos in computer communications, rules must be established for the exchange of data from one site to another. These rules are known as a line protocol. Communications software packages control the speed and mode of communications between computer systems.

Many different standard network protocols exist to perform addressing, routing, and packetizing. All provide formal definitions for how addressing and routing are to be executed, and specify packet structures for transferring this information between computers. OSI, TCP/IP, IPX/SPX, and X.25 are commonly used protocol suites and models.

Open Systems Interconnection (OSI):
A major problem of early networked computer systems was the lack of consistency among the protocols of different types of computers. Consequently, various efforts have resulted in the establishment of standards for data transmission protocols. For example, the International Organization for Standardization (ISO) developed a set of standard protocols called Open Systems Interconnection (OSI). The OSI model separates each network's functions into seven layers of protocols, or communication rules. This model identifies functions that should be offered by any network system. It is important to note that the physical layer, data link layer, and network layer appear in the user and host computers as well as in units such as the front-end processor and the cluster control unit. The remaining layers appear only in the user and host computers.
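The layering idea can be sketched in a few lines of Python. The layer names are the OSI model's own; the bracketed "headers" are placeholders invented for illustration, not real protocol headers:

```python
# Sketch: encapsulation down the seven OSI layers.
# Each layer wraps the payload handed down from the layer above.
OSI_LAYERS = [
    "application", "presentation", "session",
    "transport", "network", "data link", "physical",
]

def encapsulate(payload):
    """Wrap the payload with one placeholder header per layer, top-down."""
    pdu = payload
    for layer in OSI_LAYERS:
        pdu = f"[{layer}]{pdu}"
    return pdu

def decapsulate(pdu):
    """Strip the headers bottom-up to recover the original payload."""
    for layer in reversed(OSI_LAYERS):
        prefix = f"[{layer}]"
        if not pdu.startswith(prefix):
            raise ValueError(f"missing {layer} header")
        pdu = pdu[len(prefix):]
    return pdu
```

Decapsulation strips the headers in the reverse order, which is exactly what a receiving stack does layer by layer.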

TCP/IP:
TCP/IP (Transmission Control Protocol/Internet Protocol) is a set of communications protocols developed for internetworking dissimilar systems. It is supported by many hardware vendors, on platforms from microcomputers to mainframes, and is used by most universities, the federal government, and many corporations. TCP/IP has two parts: the TCP protocol controls data transfer, which is the function of the transport layer in the OSI model, and the IP protocol provides the routing and addressing mechanisms, which are the roles of the network layer in the OSI model.
TCP/IP may be the oldest networking standard, and it is also the most popular network protocol, used by almost 50 percent of all installed backbones, MANs (metropolitan area networks), and WANs (wide area networks). TCP/IP is widely compatible with many other protocols. Although TCP/IP supports many protocols, it is usually associated with Ethernet. TCP/IP is also the network protocol used on the Internet.
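As a concrete illustration of the two parts working together, here is a minimal TCP exchange over the loopback interface in Python: IP supplies the addressing (127.0.0.1) and TCP supplies the reliable byte stream. The echo-style server and the test message are invented for the example:

```python
# Minimal TCP exchange over loopback: IP provides the addressing,
# TCP provides the reliable, ordered byte stream on top of it.
import socket
import threading

def run_echo_server(server_sock):
    conn, _addr = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())   # reply with an upper-cased echo

server = socket.create_server(("127.0.0.1", 0))  # port 0: pick a free port
port = server.getsockname()[1]
t = threading.Thread(target=run_echo_server, args=(server,))
t.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"tcp/ip test")
    reply = client.recv(1024)
t.join()
server.close()
```

The same client code would work unchanged against a server on another machine; only the address would differ, which is the point of the IP layer.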

IPX/SPX:
IPX/SPX (Internetwork Packet Exchange/Sequenced Packet Exchange) is a Novell NetWare communications protocol used to route messages from one end to another. It is the major network protocol used by Novell NetWare, and about 40 percent of all installed LANs (local area networks) use this protocol.
IPX/SPX has two parts and is similar to TCP/IP. SPX controls the transport layer in the OSI model and guarantees that an entire message arrives intact. IPX manages the role of the network layer in the OSI model and is used as the delivery mechanism for SPX. IPX/SPX can be linked with many other protocols.

X.25:
X.25 is a CCITT standard developed by the ITU-TSS for WANs (wide area networks). It defines the interface between an end-user computer and a packet-switching network. It is an international standard used by many worldwide corporations. It also has two parts: the Packet Layer Protocol (PLP) is the routing protocol that manages the network layer, and X.3 controls the transport layer.

Monday, October 28, 2013

Network Architecture

A network architecture is a conceptual framework that describes, in terms of different layers, how data and network information are communicated from an application on one computer, through network media, to an application on another computer. Network architecture is also known as a reference model. There are two main classifications of reference models: open and closed.

An open model is one that is open to everyone, with no secrecy. In a closed model, also known as a proprietary system, the architecture is kept secret from users. The OSI model is an open model, while IBM's SNA seven-layer model is a closed system. Here, let's discuss the two main reference models: the OSI model and the TCP/IP model.

TCP/IP Model (Internet Architecture):
  • TCP/IP (Transmission Control Protocol/Internet Protocol) defines a large collection of protocols that allow computers to communicate. It has a four-layer architecture. 
  • TCP/IP defines each of these protocols inside documents called Requests for Comments (RFCs). 
  • By implementing the required protocols in the TCP/IP RFCs, a computer can be relatively confident that it can communicate with other computers that also implement TCP/IP.

Sunday, October 27, 2013

Routing Algorithms

Non-Hierarchical Routing:

In this type of routing, interconnected networks are viewed as a single network, where bridges, routers and gateways are just additional nodes. Every node keeps information about every other node in the network.
In the case of adaptive routing, the routing calculations are done and updated for all the nodes. These two properties are also the disadvantages of non-hierarchical routing, since the table sizes and the routing calculations become too large as the networks get bigger. So this type of routing is feasible only for small networks.
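A small simulation makes the table-size problem visible. Assuming a made-up five-node topology, each node's table ends up with an entry for every other node, so tables grow linearly per node and quadratically for the network as a whole:

```python
# Sketch: in non-hierarchical routing every node keeps a next-hop
# entry for every other node.  Topology and node names are invented.
from collections import deque

TOPOLOGY = {                 # node -> list of neighbours
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def routing_table(source, topology):
    """BFS from `source`; record the first hop toward every other node."""
    table = {}
    queue = deque((neighbour, neighbour) for neighbour in topology[source])
    visited = {source}
    while queue:
        node, first_hop = queue.popleft()
        if node in visited:
            continue
        visited.add(node)
        table[node] = first_hop
        for nxt in topology[node]:
            queue.append((nxt, first_hop))
    return table

# One full table per node: n * (n - 1) entries across the network.
tables = {node: routing_table(node, TOPOLOGY) for node in TOPOLOGY}
```

Even in this five-node toy, every node stores four entries; at a thousand nodes each table holds 999 entries and every topology change ripples into all of them.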

Hierarchical Routing:

This is essentially a 'Divide and Conquer' strategy. The network is divided into different regions and a router for a particular region knows only about its own domain and other routers. Thus, the network is viewed at two levels:

  • The sub-network level, where each node in a region has information about its peers in the same region and about the region's interface with other regions. Different regions may have different 'local' routing algorithms. Each local algorithm handles the traffic between nodes of the same region and also directs outgoing packets to the appropriate interface.
  • The network level, where each region is considered a single node connected to its interface nodes. The routing algorithms at this level handle the routing of packets between two interface nodes and are isolated from intra-regional transfer.

Networks can be organized in hierarchies of many levels; e.g. the local networks of a city at one level, the cities of a country at the level above it, and finally the network of all nations.

In Hierarchical routing, the interfaces need to store information about:
  • All nodes in its region which are at one level below it.
  • Its peer interfaces.
  • At least one interface at a level above it, for outgoing packets.
Advantages of Hierarchical Routing :
  • Smaller sizes of routing tables.
  • Substantially fewer calculations and updates of routing tables.
Disadvantage :

Once the hierarchy is imposed on the network, it is followed, and the possibility of direct paths is ignored. This may lead to suboptimal routing.
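A sketch of a two-level lookup, with invented region and node names, shows both the small tables and the source of the sub-optimality: every foreign-region packet is forced through the local interface node, even when a shorter direct path exists:

```python
# Sketch: two-level hierarchical lookup for a node in region "east".
# A node stores routes for its own region plus one entry per foreign
# region, so its table stays small.  All names here are invented.

REGION_INTERFACES = {        # region -> that region's interface (border) node
    "east": "east.gw",
    "west": "west.gw",
}

# Intra-region next hops known by the hypothetical node "east.a".
LOCAL_TABLE = {
    "east.b": "east.b",
    "east.gw": "east.b",     # reach our gateway via east.b
}

def next_hop(destination, my_region="east"):
    region = destination.split(".")[0]
    if region == my_region:
        return LOCAL_TABLE[destination]      # sub-network level
    gateway = REGION_INTERFACES[my_region]   # network level: hand the
    return LOCAL_TABLE[gateway]              # packet toward our interface
```

Note that the node never learns anything about individual nodes inside "west"; one gateway entry covers the whole region, which is where the table savings come from.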

Saturday, October 26, 2013

Flow Models

A computer network comprises nodes, corresponding to network elements such as workstations, routers and switches, and links that connect those elements. A flow contains all the traffic originating at a node and destined for some other node in the network. Each flow can in principle traverse a set of paths connecting its origin and destination, which is determined by the routing policy.

In computer networks, the flow traffic is carried in packets, whose payload is expressed in bytes, while on road networks, the traffic is carried in vehicles. The volume of traffic measured on a link may refer to either the number of packets or the number of bytes in computer networks, and such data for a particular time interval (typically of the order of a couple of minutes) are available through queries using the Simple Network Management Protocol (SNMP).

The volume of traffic on a link is the sum of the volumes of all flows traversing that link. This produces highly aggregated data, and the question of interest is to estimate various statistics of the underlying network.
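That summation is easy to state in code. Assuming a toy set of flows with fixed routed paths (all values invented), the per-link volumes are just the flow volumes accumulated along each path:

```python
# Sketch: link volume = sum of the volumes of all flows whose routed
# path traverses that link.  Flows, paths and byte counts are invented.
FLOWS = {
    # (origin, destination): (path as a list of links, bytes)
    ("a", "c"): (["a-b", "b-c"], 1200),
    ("a", "b"): (["a-b"], 300),
    ("b", "c"): (["b-c"], 500),
}

def link_volumes(flows):
    volumes = {}
    for path, nbytes in flows.values():
        for link in path:
            volumes[link] = volumes.get(link, 0) + nbytes
    return volumes

volumes = link_volumes(FLOWS)
```

Going the other way, from observed link volumes back to the individual flows, is the hard inference problem the text alludes to: many different flow assignments can produce the same aggregate link counts.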

Friday, October 25, 2013

Dynamic Network Mapping

For enterprise networks, static maps such as Visio diagrams take a significant amount of effort to create and they can become obsolete quickly. Dynamic mapping is the next generation mapping automation technology, featuring:

  • Data-driven map automation – maps with rich details can be created instantly
  • On-demand map creation - each map is customized for the task at hand
  • Automatically updated – when the live network changes, maps can be updated accordingly

Automated Network Documentation:
  • To automate network documentation, NetBrain leverages a state-of-the-art discovery engine that discovers both network topology and the network design underneath. Any network change will be automatically captured by recurring discovery.
Network documentation is available in the following formats:
  • Diagrams in Visio format
  • Design documents in Word format
  • Inventory reports in Excel format

Map-Driven Network Troubleshooting:

Instead of typing commands into the CLI to figure out what’s happening, you can troubleshoot complex network problems in a map-driven environment from beginning to end. With this unique map-driven troubleshooting tool, you can:
  • Map a problem area instantly through on-demand mapping technology.
  • Visualize performance hotspots and up/down status directly through a color-coded dynamic map.
  • Analyze what’s changed in topology, routing, configuration and traffic flow.
  • Run Automation Procedures to immediately find errors and discrepancies.

Thursday, October 24, 2013

Network mapping Tools for Linux

There are several things that might do what you want. I've listed network mapping tools for Linux below:
  • Nagios/Nagvis
  • Mila_Ajax_Map
  • Safe Mapping and Reporting Tool (SMART)
  • Network Scanner
  • NMap Console
  • Oggle Network Mapping and Display Tool
  • Prime
  • CartoReso
  • OpenMapper
  • OSPF network visualizer
  • netfuse
  • Ajax Network Map
  • Network Administration Visualized
  • Advanced Network Topology and Inventory 
If, however, you want an easy life, you should probably check out the packages that your distro offers. Depending on the repos that you have enabled, you should get a list something like this:
  • lanmap
  • netdude
  • netmrg
  • zabbix
Looking at the names, netdude seems an attractive prospect, as the possibility is strong that one of these dudes was inspired by the other in some way. Lanmap is similar to The Dude; it creates a graphical layout of your network.

Wednesday, October 23, 2013

Internet Mapping

Network mapping, otherwise known as Internet mapping, is a group of tasks used to study Internet connectivity and determine how network systems are operated. In effect, network mapping develops visual materials that can be used for a large variety of purposes, ranging from business to national security. Network mapping makes use of software to identify operating systems and other technical information, but can also provide a better overall understanding of how different networks operate.

Basic network mapping tasks include flow charts, network diagrams, and device inventories. More advanced techniques, such as active probing, can be used to create network maps and to analyze the network and its processes further. Active probing gathers information on the system by sending probe packets into the network. After probes are released, they report back information on the IP details of the network. This information can be used to determine how the networks operate, which can then be used to map the system.
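A minimal, self-contained sketch of the probe-and-report cycle, using a local UDP socket to stand in for the probed network element (so the measured round-trip time is only illustrative):

```python
# Sketch of active probing: send a probe packet and time the reply.
# A local UDP echo socket plays the role of the probed network element.
import socket
import time

echo = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
echo.bind(("127.0.0.1", 0))          # the stand-in "network element"
target = echo.getsockname()

probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
probe.settimeout(2.0)

start = time.perf_counter()
probe.sendto(b"probe-1", target)     # inject the probe packet
payload, sender = echo.recvfrom(1024)
echo.sendto(payload, sender)         # ...which gets echoed back
reply, _ = probe.recvfrom(1024)
rtt = time.perf_counter() - start    # probe reports its round-trip time

probe.close()
echo.close()
```

Real mapping tools send many such probes with varying time-to-live values and addresses, and assemble the replies into a picture of the paths and devices in between.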

Networks are now fast-growing systems even outside the corporate world, which makes network mapping such a valuable concept. These networks tend to be overwhelmingly complex, however, especially as they grow larger and involve different devices and connections. Network mapping deciphers these complex networks and breaks them down into segments that are more easily understandable. As mapping takes place, network systems can be visualized to communicate how the network operates.

Tuesday, October 22, 2013

Delay Requirements In Network

Amid rapid communication development, wireless and embedded technologies have created enormous innovations in the mobile environment. A Delay Tolerant Network (DTN) is a heterogeneous network in which there is no proper end-to-end connection between source and destination. A Delay Tolerant Sensor Network (DTSN), in turn, is a collection of sensor nodes consisting of transceivers and sensors for traffic monitoring between nodes in the network. Since sensor nodes are battery operated with minimal energy, information must be conveyed from source to destination without failure, even in critical situations. Traditional routing protocols are inefficient for these mobile-based DTSNs, since they require the existence of connected end-to-end paths to route any data reliably and energy-efficiently; the resulting poor routing also degrades security. This paper illustrates an energy-efficient routing protocol with the objective of providing reliable security among nodes in the DTSN. In addition, the proposed routing method also has the intention of enhancing the Quality of Service (QoS) by reducing transmission time, packet loss and delayed response through balancing energy consumption among nodes in the network. Comparative analysis has also been done, and the proposed Energy Efficient Routing Protocol (EERP) outperforms the previous method in terms of Quality of Service.

The highly successful architecture and protocols of today’s Internet may operate poorly in environments characterized by very long delay paths and frequent network partitions. Delay Tolerant Networks (DTNs) are emerging solutions for networks that experience frequent partitions and large end-to-end delays. In this paper, we study how to provide a high-performance routing technique in DTNs. We develop a multicasting mechanism based on on-demand path discovery and overall awareness of link availability to address the challenges of opportunistic link connectivity in DTNs. Simulation results show that this method can achieve a better message delivery ratio than existing approaches, e.g. DTBR (a dynamic tree-based routing), with similar delay performance. The ERBR approach also achieves better efficiency when the probability of link unavailability is high and the duration of link downtime is large.

Monday, October 21, 2013

Thresholds - RMA

There are two types of thresholds: general and environment-specific. General thresholds are those that apply to most or all networks. They are rules of thumb that have been determined via experience to work in most environments. They are applied when there are no environment-specific thresholds to use. Environment-specific thresholds are determined for the environment of the current network project on which you are working. They are specific to that environment and typically are not applicable to other networks. These thresholds are useful in distinguishing between low and high performance for the network.

Reliability

Reliability is a statistical indicator of the frequency of failure of the network and its components and represents the unscheduled outages of service. A measure of reliability is the mean time between critical failures (MTBCF), usually expressed in hours. A related measure is the mean time between failures (MTBF), which considers all failures, regardless of their significance at the time of failure, and is a conservative approximation, useful in simple systems. MTBF can confuse the designer of a complex system. As systems become more complex and resource limitations restrict the degree of redundancy or the purchase of higher-reliability components, the use of MTBCF becomes more illuminating, although it does take more careful analysis, focusing on the system performance during specific mission scenarios. MTBF is computed as the inverse of the failure rate, which is estimated through testing.
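A quick worked example of that computation, with illustrative numbers; the availability formula (MTBF over MTBF plus MTTR) is a standard companion measure added here for context, not something specific to this discussion:

```python
# Worked example: MTBF as the inverse of the failure rate, plus
# steady-state availability.  All numbers are illustrative only.
failures_per_hour = 0.0005          # failure rate estimated from testing
mtbf_hours = 1 / failures_per_hour  # 2,000 hours between failures

mttr_hours = 4                      # assumed mean time to repair
availability = mtbf_hours / (mtbf_hours + mttr_hours)
```

With these numbers the system is available about 99.8 percent of the time; note that a high MTBF alone says nothing about how long each outage lasts.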

Reliability, Availability and Maintainability (RAM) studies are typically associated with the defense and aerospace industries. However, a number of studies have been performed for refining, petrochemical, offshore, and power generation facilities.

The application of relatively inexpensive but powerful RAM forecasting tools can provide a number of benefits to the owners and operators of oil refineries, gas plants, chemical plants, and other processing facilities.

Those benefits can include:
  • Reducing maintenance and sparing costs while maintaining and/or increasing production levels.
  • Optimizing capital investment for reducing the cost of production.
  • A decrease in the duration of unplanned and planned outages.
  • Alignment of maintenance resources based on the criticality of equipment to production revenue.
  • Accurate forecasts of equipment life cycle costs that reflect equipment age, duty cycle, and maintenance effectiveness.
  • Optimization of capital improvement options at the plant and enterprise levels when improvement budgets are constrained.

Sunday, October 20, 2013

Application/Bandwidth Requirements

Enterprise IT managers must continually manage costs and maintain reliable WAN infrastructures to meet their business goals. But, success in today’s business climate also depends on the ability to overcome a more complex set of challenges to their corporate WAN. Enterprise IT managers are faced with the following:

  • Geographically dispersed sites and teams that must share information across the network and have secure access to networked corporate resources.
  • Mission-critical, distributed applications that must be deployed and managed on a network-wide basis. Furthermore, IT managers are faced with a combination of centralized hosted applications and distributed applications, which complicates the management task.
  • Security requirements for networked resources and information that must be reliably available but protected from unauthorized access.
  • Business-to-business communication needs, for users within the company and extending to partners and customers.
  • QoS features that ensure end-to-end application performance.
  • Support for the convergence of previously disparate data, voice, and video networks resulting in cost savings for the enterprise.
  • Security and privacy equivalent to Frame Relay and ATM.
  • Easier deployment of productivity-enhancing applications, such as enterprise resource planning (ERP), e-learning, and streaming video. (These productivity-enhancing applications are IP-based, and Layer 2 VPNs do not provide the basis to support these applications.)
  • Pay-as-you-go scalability as companies expand, merge, or consolidate.
  • Flexibility to support thousands of sites.

Saturday, October 19, 2013

Measuring Network Performance

Given these performance indicators, the next step is to determine how these indicators may be measured, and how the resulting measurements can be meaningfully interpreted. At this point it is useful to look at numerous popular network management and measurement tools and examine their ability to provide useful measurements. There are two basic approaches to this task; one is to collect management information from the active elements of the network using a management protocol, and from this information make some inferences about network performance.

This can be termed a passive approach to performance measurement, in that the approach attempts to measure the performance of the network without disturbing its operation. The second approach is to use an active approach and inject test traffic into the network and measure its performance in some fashion, and relate the performance of the test traffic to the performance of the network in carrying the normal payload.

Measuring Performance with SNMP

In IP networks the ubiquitous network management tool is the Simple Network Management Protocol (SNMP). The operation of SNMP is a polling operation, in which a management station directs periodic polls to various managed elements and collects the responses. These responses are used to update a view of the operating status of the network.

The most basic tool for measuring network performance is the periodic measurement of the interface byte counters. Such measurements can provide a picture of the current traffic levels on the network link, and when related to the total capacity of the link, the relative link loading level can be provided. As a performance indicator this relative link loading level can provide some indication of link performance, in that a relatively lightly loaded link would normally indicate a link that has no significant performance implications, whereas a link operating at 100 percent of total available capacity would likely be experiencing high levels of packet drop, queuing delay, and potentially a high jitter level. In between these two extremes there are performance implications of increasing the load. Of course it should be noted that the characteristics of the link have a bearing on the interpretation of the load levels, and a low-latency 10-Gbps link operating at 90-percent load will have very significantly lower levels of performance degradation than a 2-Mbps high-latency link under the same 90-percent load.
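The arithmetic behind the relative load figure is straightforward, with one wrinkle: 32-bit SNMP counters wrap, so the delta between two samples must be taken modulo 2^32. A sketch with invented counter values and link speed:

```python
# Sketch: turning two samples of an interface byte counter into a
# relative link load.  32-bit SNMP counters wrap around, so the delta
# is taken modulo 2**32 (this handles at most one wrap per interval).
COUNTER_MAX = 2**32

def utilization(bytes_t0, bytes_t1, interval_s, link_bps):
    delta = (bytes_t1 - bytes_t0) % COUNTER_MAX
    bits = delta * 8
    return bits / (interval_s * link_bps)   # fraction of capacity used

# Invented example: a 300-second poll interval on a 10 Mb/s link,
# where the counter wrapped between the two samples.
load = utilization(bytes_t0=4_294_000_000, bytes_t1=5_000_000,
                   interval_s=300, link_bps=10_000_000)
```

On a fast link a 32-bit counter can wrap more than once within a long poll interval, which is why high-speed interfaces expose 64-bit counters and why poll intervals matter.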

Friday, October 18, 2013

Requirements for Device Networking

The protocol stack must recover from intermittent packet loss quickly via packet retransmission or report a message failure to the application.
Rationale: The sorts of links that these networks run on have very low bandwidth compared to Ethernet, and unlike Ethernet the links are not nearly as reliable. Packets can be lost due to interference and noise as well as collisions. These events are relatively frequent, so additional bandwidth is consumed to recover from the loss using retransmission. Second, because these systems typically have real-time constraints, delivering a packet late is not desirable.
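A stop-and-wait sketch of this requirement: retransmit on loss a bounded number of times, then report a message failure to the application. The lossy link here is simulated with a simple counter rather than real radio loss:

```python
# Sketch: bounded retransmission with failure reported to the caller.
def send_with_retries(packet, link, max_attempts=4):
    """Return the number of attempts used, or raise to report failure."""
    for attempt in range(1, max_attempts + 1):
        if link(packet):       # True means the packet (and its ack) got through
            return attempt
    raise TimeoutError("message failure reported to the application")

def make_lossy_link(drop_first_n):
    """Simulated link that drops the first `drop_first_n` transmissions."""
    state = {"calls": 0}
    def link(_packet):
        state["calls"] += 1
        return state["calls"] > drop_first_n
    return link
```

The bound on attempts is what keeps the stack from delivering a packet arbitrarily late: past the deadline, an explicit failure is more useful to a control application than a stale success.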

It must be possible to engineer the control network so that the real-time requirements of the application are met. This involves:
  • Designing the network to meet response-time criteria by limiting the number of nodes per link and tuning the communications so the network will not become overloaded.
  • Specifying that a given communications transaction will either succeed or fail within a specified time, with the success or failure of that transaction known to the application.
Rationale: In a control system, a late packet would result in some node not doing its function in synchronization with the other nodes.

The protocol stack must implement all communications services that are needed by all nodes, without diminishing any of the services. Therefore, the protocol stack must be compact so that the control application also has adequate RAM. To put this requirement in perspective, currently available devices that combine a microcontroller and an IEEE 802.15.4 radio typically have 8–12 KB of RAM to share between the protocol stack and the application. Most developers expect to be able to use most of this RAM for their applications.
Rationale: In the world of low-cost systems-on-a-chip (SOCs), RAM is the most precious resource. In control networking applications, it is needed for buffers, to maintain state and know when to resend a packet, to detect a duplicate packet, to put packets in correct order for delivery, etc. These draws on the memory are in direct competition with the needs of the application. Given cost pressures, the use of SOCs is a reasonable solution.

The protocol stack must be independent of the underlying MAC/PHY interface.
Rationale: There is no single solution for all communication needs among all devices. Multiple RF, power line, and a variety of wired links are needed to implement various applications. Further, transceiver design continually evolves and improves, so the protocol stack must be able to take advantage of new technologies as they become available.

The protocol stack must scale to thousands of nodes and to multiple links of different speeds in a single logical network.
Rationale: Many building and factory systems are composed of well over 1,000 nodes that use many types of links to a high-speed backbone.

Network-wide multicast, with multicast group membership, must be supported. Grouping ensures that applications do not see, and do not consume resources for, multicasts not pertaining to them, other than discarding the packet at a low layer in the stack.
Rationale: Multicast conserves bandwidth and improves response time over multiple, serial unicast messages. When closing a control loop over a network it is sometimes critical that all nodes that subscribe to a sensor value get that value very close to the same time. Applications cannot require that all the nodes in their group are on a common link because some messages, such as emergency messages, must go to most, or even all the nodes on the network. A node does not have the memory to process all multicasts just to discover which ones do not apply to it.
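The group-membership filter can be sketched as a check made low in the stack, before the application ever sees the packet. The group names are invented for the example:

```python
# Sketch: discard multicasts for groups this node has not joined,
# before they reach the application.
class Node:
    def __init__(self, groups):
        self.groups = set(groups)   # groups this node subscribes to
        self.delivered = []         # what the application actually sees

    def receive(self, group, payload):
        if group not in self.groups:
            return False            # discarded low in the stack
        self.delivered.append(payload)
        return True
```

The cost of an unwanted multicast is then one membership check and nothing more, which is what makes network-wide groups affordable on memory-starved nodes.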

Confirmed, network-wide multicast must be supported.
Rationale: In applications where the message must get through or a major equipment shutdown is required the sending node must be able to have confirmation that its message was received by all the members of the multicast group.

The protocol stack must support duplicate packet detection and resend the previously generated response without reprocessing or regenerating it.
Rationale: Duplicate packets should be ignored. For example, suppose a utility customer on a pre-pay contract adds money to their account and the additional credit is transferred to their meter, but the meter’s acknowledgement packet is lost. The utility then re-sends the add-credit message. Correct behavior is for the meter to add the credit only once.
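The meter example can be sketched directly: each request carries a (sender, sequence) key, and the cached response is replayed for a duplicate instead of reprocessing the request. The class name and message format are invented:

```python
# Sketch of duplicate detection: replay the cached response for a
# repeated (sender, sequence) pair instead of re-applying the credit.
class Meter:
    def __init__(self):
        self.credit = 0
        self.responses = {}   # (sender, seq) -> previously generated response

    def handle(self, sender, seq, amount):
        key = (sender, seq)
        if key in self.responses:
            return self.responses[key]   # duplicate: resend, don't re-add
        self.credit += amount
        response = f"ack {seq}: credit={self.credit}"
        self.responses[key] = response
        return response
```

In a real stack the response cache is bounded, which is exactly the RAM pressure the earlier requirement describes: duplicate detection costs memory per recent peer.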

The protocol stack must support a mechanism that allows emergency messages to be routed in an expedited manner, bypassing router and node queues.
Rationale: In control networks, sometimes all nodes respond to an event (for example, an oil refinery is about to catch fire), which causes a flood of messages. Not all those messages are crucial to resolving the problem, but the messages that are must be propagated quickly across the network.

The protocol stack must ensure that packets are received in the order they are sent.
Rationale: There are many control operations that depend on a sequence to prevent damage to equipment or simply to work correctly.
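A minimal reorder buffer shows the mechanism: early arrivals are held until the sequence gap is filled, and delivery to the application always happens in order:

```python
# Sketch: in-order delivery.  Out-of-order arrivals are buffered until
# the missing sequence numbers show up.
class ReorderBuffer:
    def __init__(self):
        self.next_seq = 0      # next sequence number owed to the application
        self.pending = {}      # early arrivals, keyed by sequence number
        self.delivered = []    # what the application has received, in order

    def receive(self, seq, payload):
        self.pending[seq] = payload
        while self.next_seq in self.pending:
            self.delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
```

The `pending` dictionary is another of the per-connection RAM draws the stack requirements mention: holding packet 2 while waiting for packet 0 costs buffer space.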

Thursday, October 17, 2013

Network Supportability

The ability of the customer to sustain the required level of performance (the level architected and designed into the network) over the entire life cycle of the network is an area of networking that is often neglected. It is a mistake to assume that a successful network architecture and design meets the requirements only on the day it is delivered to the customer and that future requirements are the responsibility of the customer.
Experience indicates that 80% of the life-cycle costs of a system are operations and support costs, and only 20% is the cost incurred to develop, acquire, and install it.
 
Good network architects/designers will take into account the major factors that affect operability and supportability as they make their decisions. Knowledgeable customers will insist on understanding the operations and support implications of a network architecture and design. At times, such issues may be of more concern than the feasibility of a new technology.

The post-implementation phases of a network's life cycle can be broken into three elements: operations, maintenance, and human knowledge. The operations element focuses on ensuring that the network and system are properly operated and managed and that any required maintenance actions are identified. The maintenance element focuses on preventive and corrective maintenance and on the parts, tools, plans, and procedures for accomplishing these functions. The human knowledge element is the set of documentation, training, and skilled personnel required to operate and maintain the network and system. Design decisions affect each of these factors and have a direct impact on the ability of the customer to sustain the high level of service originally realized upon implementation of the network.

Failure to consider supportability in the analysis, architecture, and design processes has a number of serious consequences. First, a smart customer, when faced with a network architecture/design that obviously cannot be operated or maintained by his or her organization, will reject the network project or refuse to pay for it. Second, a customer who accepts the architecture/design and subsequent implementation will have inadequate resources to respond to network and system outages, experience unacceptable performance after a period of time, and may suffer adverse effects in his or her operation or business.
 
Other customers will be highly dissatisfied with their networks and will either require the architect/designer to return and repair them by providing adequate materials to sustain the required performance level, or will prematurely replace them. None of these cases reflects positively on the network architect/designer or the implementation team, and they often lead to finger pointing that can be more painful than any acceptance test.

Wednesday, October 16, 2013

Methodology Of Network Design

Frequently, engineers will dive right in and deploy an upgrade or network/system addition without much forethought or planning. Then the problems begin. Most problems that occur during a network/system upgrade can be avoided if a little forethought and planning is done.

Step 1   Define the scope of the project and understand the implications of what is about to be done. Without understanding the implications of what is to be done, there are often undesired results. One must remember that most intelligent network devices communicate with each other; therefore, the consequences of a network addition or upgrade may propagate throughout the network/system.

Step 2    Design the network/system upgrade/addition on a whiteboard. This brainstorming session exposes many of the details that can cause problems later. Remember that with technology, the devil is in the details. Design includes determining:

A) The equipment required
B) Cabling
C) Routing implications
D) Logical configuration
E) IP addressing
F) Power requirements

Step 3    Draw the upgrade using Visio. Proposed drawings show the detail of what is about to be done. In a large organization, drawings are an effective communications tool. After the next step is done, drawings should be updated to reflect the "As-Built" condition, because frequently what is proposed is not what is built.

Step 4    Deploy it.  After all the above effort is completed and the necessary parts and equipment are obtained, the upgrade/addition can finally be deployed (built). It is surprising how smoothly things go when adequate planning is done. Often network upgrades and additions have visibility throughout the organization. If things go right, it's hardly noticed. On the other hand, if a network upgrade goes badly, everybody seems to know about it.
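The four steps above can be captured as a simple pre-deployment checklist. Here is a sketch in Python; every name and value is invented for illustration, not an industry standard:

```python
# A hypothetical checklist mirroring the four-step methodology above.
upgrade_plan = {
    "scope": "Add a second access switch to the third floor",  # Step 1
    "design": {                       # Step 2: the whiteboard details
        "equipment": ["48-port switch", "SFP uplinks"],
        "cabling": "Cat6 to patch panel",
        "routing": "no new subnets; layer 2 only",
        "logical_config": "trunk uplink, access VLAN 30",
        "ip_addressing": "management IP from 10.30.0.0/24",
        "power": "single 350 W PSU, UPS-backed",
    },
    "drawings": {"proposed": True, "as_built": False},         # Step 3
    "deployed": False,                                         # Step 4
}

def ready_to_deploy(plan):
    """Allow Step 4 only after the scope, every design detail, and a
    proposed drawing exist (Steps 1-3)."""
    return bool(plan["scope"]) and all(plan["design"].values()) \
        and plan["drawings"]["proposed"]
```

Gating deployment on the earlier steps is the whole point of the methodology: the checklist makes a skipped step visible before it becomes an outage.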

Tuesday, October 15, 2013

Best Practices in Network Monitoring System

A large part of the picture, in addition to monitoring the application data and general network conditions, is the infrastructure supporting the system. Administrators can enhance their visibility and awareness of underlying resources by seeking a solution that provides several key features in monitoring the network and proactively managing traffic.

The underlying components that support user applications should be monitored to ensure proper application delivery. These devices serve as the foundation of the entire delivery process and must be running at optimal efficiency to maximize overall service delivery.

Baselining gives IT a long-term view and a starting point to assess when performance is higher or lower than expected. This is an often overlooked capability that can provide valuable insight into behavior over extended periods of time. In particular, observing activity over time windows greater than six months can reveal slowly degrading performance that might otherwise be missed.
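A minimal sketch of that idea in Python: compare a recent window of measurements against a long-term baseline and flag deviations. The data and the 20% tolerance are illustrative assumptions:

```python
# Baselining sketch: flag when recent behavior drifts from the
# long-term mean. Numbers and threshold are invented for illustration.
from statistics import mean

def deviates_from_baseline(history, recent, tolerance=0.20):
    """True if the recent average differs from the long-term baseline
    by more than `tolerance` (20% by default)."""
    baseline = mean(history)
    return abs(mean(recent) - baseline) > tolerance * baseline

# Six months of weekly response-time averages (ms), then recent weeks:
history = [100, 101, 99, 102, 100, 98, 101, 100]
assert not deviates_from_baseline(history, [103, 104])  # within normal
assert deviates_from_baseline(history, [130, 135])      # slow creep exposed
```

Without the stored history there is nothing to compare against, which is why the capability is easy to overlook until performance has already degraded.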

The user experience isn’t for end users alone. IT should have a single dashboard through which network activities can be monitored and managed, providing an at-a-glance problem identification. This perspective can be the starting point of any potential troubleshooting that may be required.

Video performance is increasingly important as telepresence calls replace face-to-face meetings. Network administrators should validate unified communications performance, including VoIP analysis, to ensure satisfied users. Of particular importance is to understand how the significantly higher utilization in conjunction with high levels of network priority may impact all applications and services operating on the infrastructure.
Resources can be used more efficiently if monitoring activities are conducted entirely by the network probe, which reduces overhead. Therefore, seek those solutions which perform the majority of their processing locally at the probe.

Monday, October 14, 2013

Network Metrics and Monitoring

The first metric is the end user’s page response time. This is a measure of the time required for the original request to be processed. It could be measured by placing a probe near the client to measure the turn time and validate the processing of the request. An example would be the duration from a request for a website to when the content is displayed on the user’s client.
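A sketch of that measurement in Python, timing a stand-in request handler the way a probe would time the turn around a real request (the handler and its 50 ms of "work" are invented for illustration):

```python
# Hedged sketch: measuring page response time as the elapsed wall-clock
# time around request processing. `handle_request` is a stand-in.
import time

def handle_request(url):
    time.sleep(0.05)                 # simulate 50 ms of server work
    return "<html>...</html>"

start = time.perf_counter()
body = handle_request("http://example.test/")
response_time = time.perf_counter() - start   # the user's "turn time"
```

A real probe would do the same subtraction between the packet carrying the request and the packet completing the response, rather than wrapping a function call.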

Next is the total number of transactions processed. Measuring the volume of transactions during a specified time period is useful for seeing whether it is too high, which results in transactions being caught in the queue. This in turn leads to errors and causes issues such as the client processing the request again. Excessive transactions can materially impact the individual user experience.

Server query response time is also an important metric. This is the application-server side counterpart to page response time. Application-specific visibility is necessary in order to assess detailed response times and view transactions. Continuing with the website example, this might be the time the web server waits until it is able to construct the entire URL contents.

Another advantage of having application-specific visibility is being able to measure traffic flow data. Data can be collected about each conversation showing the flow by application, including such details as packets, bytes, connections and request details. Understanding the traffic flow mix and volume on a network can provide invaluable indirect insight into how individual users currently perceive usability. Perhaps more important, it can provide insight into trends that may eventually impact users if remedial action is not taken.

Finally, server errors themselves provide useful information. Details of the history of packet captures and application transaction details show server conditions, and this visibility shows when errors result from a higher number of requests than the server can handle. Too many server errors can ultimately impact users’ overall service delivery experience.

Network latency (RTT), server utilization, network availability and bandwidth utilization are additional metrics that provide insight into the user experience. Degradation in any of these, alone or in combination with the parameters discussed above, can impact the delivery of services to users.

Sunday, October 13, 2013

Application Layer - OSI Model

At the very top of the OSI Reference Model stack of layers sits the application layer. The application layer is the one used by network applications: the programs that actually implement the functions performed by users to accomplish various tasks over the network.

It's important to understand that what the OSI model calls an “application” is not exactly the same as what we normally think of as an “application”. In the OSI model, the application layer provides services for user applications to employ. For example, when you use your Web browser, that actual software is an application running on your PC. It doesn't really “reside” at the application layer. Rather, it makes use of the services offered by a protocol that operates at the application layer, which is called the Hypertext Transfer Protocol (HTTP). The distinction between the browser and HTTP is subtle, but important.
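One way to make the browser/HTTP distinction concrete is to look at an HTTP request itself. A minimal, hand-built HTTP/1.1 request (the host name is illustrative) shows the protocol that lives at the application layer, separate from any particular browser:

```python
# The browser is the application; HTTP is the application-layer
# protocol it speaks. These bytes ARE the protocol:
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"
    "\r\n"                     # blank line ends the header section
)
# Any program that emits a message like this is "using the application
# layer", whether it is a full browser or a ten-line script.
```

The request line, headers, and terminating blank line are defined by the HTTP specification, not by the browser; the browser merely constructs and sends them on your behalf.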

The reason for pointing this out is because not all user applications use the application layer of the network in the same way. Sure, your Web browser does, and so does your e-mail client and your Usenet news reader. But if you use a text editor to open a file on another machine on your network, that editor is not using the application layer. In fact, it has no clue that the file you are using is on the network: it just sees a file addressed with a name that has been mapped to a network somewhere else. The operating system takes care of redirecting what the editor does, over the network.

Similarly, not all uses of the application layer are by applications. The operating system itself can (and does) use services directly at the application layer.

That caveat aside, under normal circumstances, whenever you interact with a program on your computer that is designed specifically for use on a network, you are dealing directly with the application layer. For example, sending an e-mail, firing up a Web browser, or using an IRC chat program—all of these involve protocols that reside at the application layer.

There are dozens of different application layer protocols that enable various functions at this layer. Some of the most popular ones include HTTP, FTP, SMTP, DHCP, NFS, Telnet, SNMP, POP3, NNTP and IRC. Lots of alphabet soup, sorry. I describe all of these and more in the chapter on higher-layer protocols and applications.

As the “top of the stack” layer, the application layer is the only one that does not provide any services to the layer above it in the stack—there isn't one! Instead, it provides services to programs that want to use the network, and to you, the user. So the responsibilities at this layer are simply to implement the functions that are needed by users of the network. And, of course, to issue the appropriate commands to make use of the services provided by the lower layers.

Saturday, October 12, 2013

Presentation Layer - OSI Model

The presentation layer is layer 6 of the 7-layer Open Systems Interconnection (OSI) model. It is used to present data to the application layer (layer 7) in an accurate, well-defined and standardized format.

The presentation layer is sometimes called the syntax layer. The presentation layer is responsible for the following:
  • Data encryption/decryption
  • Character/string conversion
  • Data compression
  • Graphic handling
The presentation layer mainly translates data between the application layer and the network format. Data can be communicated in different formats via different sources. Thus, the presentation layer is responsible for integrating all formats into a standard format for efficient and effective communication.
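Two of those duties, character/string conversion and data compression, can be sketched with the Python standard library. This is an analogy for what the layer does, not a literal OSI implementation:

```python
# Presentation-layer-style translations: Unicode text converted to a
# standard wire encoding (UTF-8), then compressed for transmission.
import zlib

text = "Séance résumé"              # application-level characters
wire_bytes = text.encode("utf-8")   # character/string conversion
compressed = zlib.compress(wire_bytes)   # data compression

# The receiving side reverses each translation in order:
restored = zlib.decompress(compressed).decode("utf-8")
assert restored == text
```

The essential point survives the analogy: both ends must agree on the format, and each translation applied on the way out is undone, in reverse order, on the way in.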

The presentation layer follows data programming structure schemes developed for different languages and provides the real-time syntax required for communication between two objects such as layers, systems or networks. The data format should be acceptable to the next layer; otherwise, the presentation layer may not perform correctly.

Network devices or components used by the presentation layer include redirectors and gateways.

Friday, October 11, 2013

Transport Layer Services and Principles

The Internet, and more generally a TCP/IP network, makes available two distinct transport-layer protocols to the application layer. One of these protocols is UDP (User Datagram Protocol), which provides an unreliable, connectionless service to the invoking application. The second of these protocols is TCP (Transmission Control Protocol), which provides a reliable, connection-oriented service to the invoking application. When designing a network application, the application developer must specify one of these two transport protocols.

To simplify terminology, when in an Internet context, we refer to the 4-PDU as a segment. We mention, however, that the Internet literature also refers to the PDU for TCP as a segment but often refers to the PDU for UDP as a datagram. But this same Internet literature also uses the terminology datagram for the network-layer PDU! For an introductory book on computer networking such as this one, we believe that it is less confusing to refer to both TCP and UDP PDUs as segments, and reserve the terminology datagram for the network-layer PDU.

Before proceeding with our brief introduction of UDP and TCP, it is useful to say a few words about the Internet's network layer. The Internet's network-layer protocol has a name -- IP, which abbreviates "Internet Protocol". IP provides logical communication between hosts. The IP service model is a best-effort delivery service. This means that IP makes its "best effort" to deliver segments between communicating hosts, but it makes no guarantees. In particular, it does not guarantee segment delivery, it does not guarantee orderly delivery of segments, and it does not guarantee the integrity of the data in the segments. For these reasons, IP is said to be an unreliable service. We also mention here that every host has an IP address. We will examine IP addressing in detail in Chapter 4; for this chapter we need only keep in mind that each host has a unique IP address.

Having taken a glimpse at the IP service model, let's now summarize the service models of UDP and TCP. The most fundamental responsibility of UDP and TCP is to extend IP's delivery service between two end systems to a delivery service between two processes running on the end systems. Extending host-to-host delivery to process-to-process delivery is called application multiplexing and demultiplexing. We'll discuss application multiplexing and demultiplexing in the next section. UDP and TCP also provide integrity checking by including error detection fields in their headers. These two minimal transport-layer services -- host-to-host data delivery and error checking -- are the only two services that UDP provides! In particular, like IP, UDP is an unreliable service -- it does not guarantee that data sent by one process will arrive intact at the destination process. UDP is discussed in detail in Section 3.3.
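UDP's bare-bones service is easy to see in code. The sketch below sends one datagram over the loopback interface: no connection setup, no acknowledgment, just a message addressed to a port. (On loopback, delivery almost always succeeds; across a real network, UDP still makes no guarantee.)

```python
# UDP demonstration over loopback: connectionless, no handshake.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # OS picks a free port
receiver.settimeout(5)                 # don't hang if the datagram is lost
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))   # no connection, no ACK

data, addr = receiver.recvfrom(1024)
sender.close()
receiver.close()
```

Note what is absent: no connect(), no sequence numbers, no retransmission. If the datagram had been dropped, recvfrom() would simply have timed out.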

TCP, on the other hand, offers several additional services to applications. First and foremost, it provides reliable data transfer. Using flow control, sequence numbers, acknowledgments and timers (techniques we'll explore in detail in this chapter), TCP's guarantee of reliable data transfer ensures that data is delivered from sending process to receiving process, correctly and in order. TCP thus converts IP's unreliable service between end systems into a reliable data transport service between processes. TCP also uses congestion control. Congestion control is not so much a service provided to the invoking application as it is a service for the Internet as a whole -- a service for the general good. In loose terms, TCP congestion control prevents any one TCP connection from swamping the links and switches between communicating hosts with an excessive amount of traffic. In principle, TCP permits TCP connections traversing a congested network link to equally share that link's bandwidth. This is done by regulating the rate at which the sending-side TCPs can send traffic into the network. UDP traffic, on the other hand, is unregulated. An application using UDP transport can send traffic at any rate it pleases, for as long as it pleases.

A protocol that provides reliable data transfer and congestion control is necessarily complex. We will need several sections to cover the principles of reliable data transfer and congestion control, and additional sections to cover the TCP protocol itself. These topics are investigated in Sections 3.4 through 3.8. The approach taken in this chapter is to alternate between the basic principles and the TCP protocol. For example, we first discuss reliable data transfer in a general setting and then discuss how TCP specifically provides reliable data transfer. Similarly, we first discuss congestion control in a general setting and then discuss how TCP uses congestion control. But before getting into all this good stuff, let's first look at application multiplexing and demultiplexing.

Thursday, October 10, 2013

DOD model

This model is sometimes called the DOD model since it was designed for the Department of Defense. It is also called the TCP/IP four-layer protocol, or the Internet protocol suite. It has the following layers:
  • Link - Device driver and interface card which maps to the data link and physical layer of the OSI model.
  • Network - Corresponds to the network layer of the OSI model and includes the IP, ICMP, and IGMP protocols.
  • Transport - Corresponds to the transport layer and includes the TCP and UDP protocols.
  • Application - Corresponds to the OSI Session, Presentation and Application layers and includes FTP, Telnet, ping, Rlogin, rsh, TFTP, SMTP, SNMP, DNS, your program, etc.
Each layer of the four-layer TCP/IP protocol generates its own set of data:
  • The Link layer corresponds to the hardware, including the device driver and interface card. The link layer has data packets associated with it depending on the type of network being used, such as ARCnet, Token Ring or Ethernet. In our case, we will be talking about Ethernet.
  • The network layer manages the movement of packets around the network and includes IP, ICMP, and IGMP. It is responsible for making sure that packets reach their destinations, and if they don't, reporting errors.
The transport layer is the mechanism used by two computers to exchange data at the software level. The two protocols that serve as the transport mechanisms are TCP and UDP.

The application layer refers to networking protocols that are used to support various services such as FTP, Telnet, BOOTP, etc. Note here, to avoid confusion, that the application layer generally refers to protocols such as FTP, Telnet, ping, and other programs designed for specific purposes, which are governed by specific sets of protocols defined in RFCs (Requests for Comments). However, a program that you write can define its own data structure to send between your client and server programs, so long as the programs you run on both the client and server machines understand your protocol. For example, when your program opens a socket to another machine, it is using the TCP protocol, but the data you send depends on how you structure it.
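That last point is worth seeing in code. The sketch below runs a trivial, invented protocol (a 2-byte length prefix followed by the payload) over a loopback TCP connection; TCP carries the bytes, but the structure is entirely ours:

```python
# A custom application protocol over TCP: 2-byte length header + payload.
# The protocol itself is invented for illustration.
import socket
import threading

def serve(listener, results):
    conn, _ = listener.accept()
    length = int.from_bytes(conn.recv(2), "big")   # our 2-byte header
    results.append(conn.recv(length).decode())     # then the payload
    conn.close()

listener = socket.socket()             # TCP (SOCK_STREAM) by default
listener.bind(("127.0.0.1", 0))
listener.listen(1)
listener.settimeout(5)

results = []
t = threading.Thread(target=serve, args=(listener, results))
t.start()

client = socket.socket()
client.connect(listener.getsockname())
payload = b"PING"
client.sendall(len(payload).to_bytes(2, "big") + payload)
client.close()
t.join()
listener.close()
```

Both ends agree on the framing (length, then payload); TCP neither knows nor cares, exactly as the paragraph above describes.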

Wednesday, October 9, 2013

Data Link Layer Definition

The data link layer is the second layer in the OSI seven-layer reference model. It responds to service requests from the network layer above it and issues service requests to the physical layer below it.

The data link layer is responsible for encoding bits into frames (often loosely called packets at this layer) prior to transmission and then decoding them back into bits at the destination. Bits are the most basic unit of information in computing and communications. Packets are the fundamental unit of information transport in all modern computer networks, and increasingly in other communications networks as well.

The data link layer is also responsible for logical link control, media access control, hardware addressing, error detection and handling and defining physical layer standards. It provides reliable data transfer by transmitting packets with the necessary synchronization, error control and flow control.

The data link layer is divided into two sublayers:
The media access control (MAC) layer and the logical link control (LLC) layer. The former controls how computers on the network gain access to the transmission medium and obtain permission to transmit data; the latter controls packet synchronization, flow control and error checking.
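The error-checking duty mentioned above can be sketched with CRC-32, the same polynomial Ethernet's frame check sequence uses (here computed with `zlib` rather than in hardware, and without Ethernet's exact bit-ordering details):

```python
# Data-link-style error detection: append a CRC-32 to each frame and
# verify it on receipt. A simplified sketch, not a literal Ethernet FCS.
import zlib

def frame(payload: bytes) -> bytes:
    """Sender side: append a 4-byte CRC over the payload."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check(received: bytes) -> bool:
    """Receiver side: recompute the CRC and compare with the trailer."""
    payload, crc = received[:-4], received[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == crc

f = frame(b"some payload")
assert check(f)                     # an intact frame passes
corrupted = b"X" + f[1:]            # damage the first byte in transit
assert not check(corrupted)         # the corruption is detected
```

Note that a CRC detects corruption but cannot repair it; recovery (retransmission) is handled by other mechanisms, at this layer or above.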

The data link layer is where most LAN (local area network) and wireless LAN technologies are defined. Among the most popular technologies and protocols generally associated with this layer are Ethernet, Token Ring, FDDI (fiber distributed data interface), ATM (asynchronous transfer mode), SLIP (serial line Internet protocol), PPP (point-to-point protocol), HDLC (high level data link control) and ADCCP (advanced data communication control procedures).

The data link layer is often implemented in software as a driver for a network interface card (NIC). Because the data link and physical layers are so closely related, many types of hardware are also associated with the data link layer. For example, NICs typically implement a specific data link layer technology, so they are often called Ethernet cards, Token Ring cards, etc. There are also several types of network interconnection devices that are said to operate at the data link layer in whole or in part, because they make decisions about what to do with data they receive by looking at data link layer packets. These devices include most bridges and switches, although some switches also encompass functions performed by the network layer. Data link layer processing is faster than network layer processing because less analysis of the packet is required.

Tuesday, October 8, 2013

Physical Layer - OSI Architecture

The Physical Layer is the first and lowest layer in the seven-layer OSI model of computer networking. The implementation of this layer is often termed PHY.

The Physical Layer consists of the basic hardware transmission technologies of a network. It is a fundamental layer underlying the logical data structures of the higher level functions in a network. Due to the plethora of available hardware technologies with widely varying characteristics, this is perhaps the most complex layer in the OSI architecture.

The Physical Layer defines the means of transmitting raw bits rather than logical data packets over a physical link connecting network nodes. The bit stream may be grouped into code words or symbols and converted to a physical signal that is transmitted over a hardware transmission medium. The Physical Layer provides an electrical, mechanical, and procedural interface to the transmission medium. The shapes and properties of the electrical connectors, the frequencies to broadcast on, the modulation scheme to use and similar low-level parameters, are specified here.

Within the semantics of the OSI network architecture, the Physical Layer translates logical communications requests from the Data Link Layer into hardware-specific operations to effect transmission or reception of electronic signals.

Saturday, October 5, 2013

Physical Layer In the Network

Understanding the Role of the Physical Layer
The name “physical layer” can be a bit problematic. Because of that name, and because of what I just said about the physical layer actually transmitting data, many people who study networking get the impression that the physical layer is only about actual network hardware. Some people may say the physical layer is “the network interface cards and cables”. This is not actually the case, however. The physical layer defines a number of network functions, not just hardware cables and cards.

A related notion is that “all network hardware belongs to the physical layer”. Again, this isn't strictly accurate. All hardware must have some relation to the physical layer in order to send data over the network, but hardware devices generally implement multiple layers of the OSI model, including the physical layer but also others. For example, an Ethernet network interface card performs functions at both the physical layer and the data link layer.

Physical Layer Functions
The following are the main responsibilities of the physical layer in the OSI Reference Model:
Definition of Hardware Specifications: The details of operation of cables, connectors, wireless radio transceivers, network interface cards and other hardware devices are generally a function of the physical layer (although also partially the data link layer; see below).

Encoding and Signaling:  
The physical layer is responsible for various encoding and signaling functions that transform the data from bits that reside within a computer or other device into signals that can be sent over the network.

Data Transmission and Reception: 
After encoding the data appropriately, the physical layer actually transmits the data, and of course, receives it. Note that this applies equally to wired and wireless networks, even if there is no tangible cable in a wireless network!

Topology and Physical Network Design: 
The physical layer is also considered the domain of many hardware-related network design issues, such as LAN and WAN topology. In general, then, physical layer technologies are ones that are at the very lowest level and deal with the actual ones and zeroes that are sent over the network. For example, when considering network interconnection devices, the simplest ones operate at the physical layer: repeaters, conventional hubs and transceivers. These devices have absolutely no knowledge of the contents of a message. They just take input bits and send them as output. Devices like switches and routers operate at higher layers and look at the data they receive as being more than voltage or light pulses that represent one or zero.

Friday, October 4, 2013

Network Simulation

Simulation is a very important modern technology. It can be applied to different science, engineering, or other application fields for different purposes. Computer-assisted simulation can model hypothetical and real-life objects or activities on a computer so that the system can be studied to see how it functions. Different variables can be used to predict the behavior of the system. Computer simulation can assist modeling and analysis in many natural systems. Typical application areas include physics, chemistry, biology, and human-involved systems in economics, finance or even social science. Other important applications are in engineering fields such as civil engineering, structural engineering, mechanical engineering, and computer engineering. Application of simulation technology to the networking area, such as network traffic simulation, however, is relatively new.

For network simulation, more specifically, it means that the computer assisted simulation technologies are being applied in the simulation of networking algorithms or systems by using software engineering. The application field is narrower than general simulation and it is natural that more specific requirements will be placed on network simulations. For example, the network simulations may put more emphasis on the performance or validity of a distributed protocol or algorithm rather than the visual or real-time visibility features of the simulations.

Moreover, network technologies keep developing very fast, many different organizations participate in the whole process, and they have different technologies or products running on different software on the Internet. That is why network simulations always require open platforms that are scalable enough to include different efforts and different packages in the simulation of the whole network. The Internet also has the characteristic that it is structured around a uniform network stack (TCP/IP): the technologies at each layer can be implemented differently but present a uniform interface to their neighboring layers. Thus network simulation tools have to incorporate this feature and allow new packages to be included and run transparently without harming existing components or packages, so that the negative impact of one package has little or no effect on the other modules or packages.
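A toy example of the discrete-event style most network simulators use: packets arrive at a single queued link and are served in order. All timings are invented for illustration; real simulators (and real protocols) are far richer:

```python
# Minimal discrete-event sketch: one link, FIFO queue, fixed per-packet
# service time. Returns each packet's departure time.
def simulate(arrivals, service_time):
    departures, free_at = [], 0.0
    for t in arrivals:
        start = max(t, free_at)       # wait if the link is still busy
        free_at = start + service_time
        departures.append(free_at)
    return departures

# Two packets arriving close together queue up; a later one does not:
print(simulate([0.0, 0.5, 3.0], 1.0))   # -> [1.0, 2.0, 4.0]
```

Even this tiny model exhibits the behavior simulations are built to expose: the second packet's delay (1.5 time units) comes from queueing, not from the link itself.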

Thursday, October 3, 2013

Network Setup

Networks – Wired vs. Wireless
The first decision you will need to make about your new network is whether you would like it to be wired or completely wireless. These two methods obviously have their upsides and downsides, but either one is suitable for your business needs. Wired (or Ethernet) networks are said to be extremely reliable, economical, secure, and easy to install. If you have a lot of components you would like to access the Internet with, however, you might opt for a wireless network, which allows you to have broadband access from a distance. Wireless networks have become very easy to install as well, thanks to Wi-Fi. You also eliminate the need for wires or cords in a wireless network, hence the name.

Network Setup – Peer-To-Peer Vs. Client-Server
The next step in how to set up a small business computer network is deciding whether to make it a peer-to-peer setup or a client-server one. Both networks connect computers so that resources can be shared between them. The fundamental differences are in the setup configuration.

In a peer-to-peer setup, every computer acts as both the client and the server. Each computer communicates directly with the other computers in the network and resources can be added or removed. A peer-to-peer setup is much more common in the home.

Equipment You Will Need:
Setting up your network peer-to-peer only requires a router and the necessary Ethernet cords to connect the router to the modem and the router to each of your computers.

Settings You Will Need:
Depending on the operating system your computers are running, you should have some built-in functions for a network. In Windows, for example, you can opt to put all computers in the same Workgroup (XP) or Homegroup (Windows 7) and enable print/file sharing. The built-in Network Setup Wizard in the Control Panel will walk you through your setup.

Client-Server Setup:
In a client-server setup, multiple clients (computers) connect to a single, central server. Public data and applications are only installed on the server and the clients connect to the server to use the resources. This type of setup is more typical in larger offices or businesses.

Wednesday, October 2, 2013

Configuration Of RIP Routing Protocol With Routers

This post shows how to configure the RIP routing protocol on a simple topology and check the connectivity between hosts.


Step 1: Create a topology like this and do the basic configuration: assign IP addresses to the router interfaces, and an IP address and default gateway to each host, as shown in the topology.

In Router R1,
R1(config)#interface fastethernet 2/0
R1(config-if)#ip address 10.0.0.1 255.0.0.0
R1(config-if)#no shutdown
R1(config-if)#exit
R1(config)#interface serial 1/0
R1(config-if)#ip address 20.0.0.1 255.0.0.0
R1(config-if)#clock rate 64000
R1(config-if)#encapsulation ppp
R1(config-if)#no shutdown
R1(config-if)#exit

In Router R2,
R2(config)#interface serial 1/0
R2(config-if)#ip address 20.0.0.2 255.0.0.0
R2(config-if)#encapsulation ppp
R2(config-if)#no shutdown
R2(config-if)#exit
R2(config)#interface fastethernet2/0
R2(config-if)#ip address 30.0.0.1 255.0.0.0
R2(config-if)#no shutdown
R2(config-if)#exit
By default, routers know only their directly connected networks.

Step 2: Check the routing tables of routers R1 and R2 by giving the command show ip route in privileged mode.
In Router R1,
R1#show ip route
Gateway of last resort is not set
C    10.0.0.0/8 is directly connected, FastEthernet2/0
C    20.0.0.0/8 is directly connected, Serial1/0
In Router R2,
R2#show ip route
Gateway of last resort is not set

C    30.0.0.0/8 is directly connected, FastEthernet2/0
C    20.0.0.0/8 is directly connected, Serial1/0

Step 3: Now run the RIP protocol on R1 and R2. What RIP will do is create a routing update containing the directly connected network information and send it to the neighboring routers.
Just add the directly connected networks:
In Router R1,
R1(config)#router rip
R1(config-router)#network 10.0.0.0
R1(config-router)#network 20.0.0.0
R1(config-router)#exit
In Router R2,
R2(config)#router rip
R2(config-router)#network 20.0.0.0
R2(config-router)#network 30.0.0.0
R2(config-router)#exit
Now the routers learn network information automatically through routing updates.

Step 4: Now give the show ip route command on R1 and R2 and check the routing tables.
In Router R1,
R1#show ip route
Gateway of last resort is not set
C    10.0.0.0/8 is directly connected, FastEthernet2/0
C    20.0.0.0/8 is directly connected, Serial1/0
R    30.0.0.0/8 [120/1] via 20.0.0.2, 00:00:28, Serial1/0
R1#
Here, network 30.0.0.0 was learned by router R1 via Serial1/0 from a routing update sent by R2, and is reachable via 20.0.0.2 (the next hop).
[120/1] - 120 is the administrative distance of RIP; 1 means reachable in one hop.

In Router R2,
R2#show ip route
Gateway of last resort is not set
R    10.0.0.0/8 [120/1] via 20.0.0.1, 00:00:16, Serial1/0
C    20.0.0.0/8 is directly connected, Serial1/0
C    30.0.0.0/8 is directly connected, FastEthernet2/0

Here, network 10.0.0.0 was learned by router R2 via Serial1/0 from a routing update sent by R1, and is reachable via 20.0.0.1 (the next hop).
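What RIP is doing behind that output can be sketched as one distance-vector update step: a received route's metric is incremented by one hop and installed if it beats (or creates) the current entry. The table shapes below are invented for illustration; real RIP also handles timers, split horizon, and more:

```python
# One simplified RIP update step. table: {network: (metric, next_hop)};
# advertised: {network: metric} as received from a neighbor.
RIP_INFINITY = 16          # RIP's "unreachable" metric

def apply_rip_update(table, neighbor, advertised):
    for network, metric in advertised.items():
        new_metric = min(metric + 1, RIP_INFINITY)   # one hop farther
        current = table.get(network)
        if current is None or new_metric < current[0]:
            table[network] = (new_metric, neighbor)
    return table

# R1's initial view: its directly connected networks at metric 0...
r1 = {"10.0.0.0/8": (0, None), "20.0.0.0/8": (0, None)}
# ...after an update from R2 (20.0.0.2) advertising its networks:
apply_rip_update(r1, "20.0.0.2", {"30.0.0.0/8": 0, "20.0.0.0/8": 0})
# r1 now contains {"30.0.0.0/8": (1, "20.0.0.2")}, matching the
# "[120/1] via 20.0.0.2" line in the routing table above.
```

The directly connected route to 20.0.0.0/8 is kept because its metric (0) beats the advertised alternative (1), which is exactly why the show output marks it with C rather than R.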

Step 5: Now ping from host 10.0.0.10 to host 30.0.0.10 by giving the command
ping 30.0.0.10; you should receive successful replies.


RIP Timers:
Update Interval - 30 seconds
Invalid Interval - 180 seconds
Hold Down Interval - 180 seconds
Flush After - 240 seconds
Administrative Distance - 120