
How NetFlow Solves for Mandatory Data Retention Compliance

Compliance in IT is not new. Laws such as HIPAA and PCI already regulate how organizations must manage customer data, SCADA environments carry their own requirements, and network transaction logging is now beginning to be required of businesses. Insurance companies are gearing up to qualify businesses by the information they retain to protect their services and customer information, and government and industry regulation and enforcement are becoming increasingly stringent.

Most recently, many countries have begun to implement mandatory data retention laws for telecommunications service providers.

Governments require a mandatory data retention scheme because more and more crime is moving from the physical world online, while ISPs are keeping less data and retaining it for shorter periods. This hampers the investigative capabilities of law enforcement and security agencies, which need timely information to help save lives by spotting lone-wolf terrorists early, or to protect vulnerable members of society from sexual predators, ransomware and other online crimes.

Although there is no doubt as to the value of mandatory data retention schemes, they are not without justifiable privacy and human rights concerns, and they are expensive to implement.

Compliance takes cash, time and skills that many ISPs and companies simply cannot afford, yet Internet and managed service providers and large organizations must take proper precautions to remain compliant. Heavy fines, license and certification issues and other penalties can result from non-compliance with mandatory data retention requirements.

According to the Australian Attorney-General’s Department, Australian telecommunications companies must keep a limited set of metadata for two years. Metadata is information about a communication (the who, when, where and how)—not the content or substance of a communication (the what).

A commentator from The Sydney Morning Herald observed that “…Security, intelligence and law enforcement access to metadata which overrides personal privacy is now in contention worldwide…” and speculated that, with the introduction of Australian metadata laws, “…this country’s entire communications industry will be turned into a surveillance and monitoring arm of at least 21 agencies of executive government. …”

In Australia, many smaller ISPs fear that failing to comply will put them out of business. Internet Australia’s Laurie Patton said, “It’s such a complicated and fundamentally flawed piece of legislation that there are hundreds of ISPs out there that are still struggling to understand what they’ve got to do”.

As for the anticipated costs, a survey sent to ISPs by telecommunications industry lobby group Communications Alliance found that “There is a huge variance in estimates for the cost to business of implementing data retention – 58 per cent of ISPs say it will cost between $10,000 and $250,000; 24 per cent estimate it will cost over $250,000; 12 per cent think it will cost over $1,000,000; some estimates go as high as $10 million.”

An important cost to consider in compliance is the ease of reporting when government or corporate compliance teams request the information retained for a specific IPv4 or IPv6 address. If the data is stored in a data warehouse that is difficult to filter, the service provider may incur penalties or be seen as non-compliant. Flexible filtering and automated reporting are therefore critical to producing the required forensics in a timely and cost-effective manner.
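
In practice, a compliance request usually reduces to “produce every retained record that touches this address within the retention window”. The sketch below shows the shape of such a lookup in Python, assuming, purely for illustration, that flow records have been archived to a SQLite table named flows; real retention stores and their schemas vary by vendor.

    # Minimal sketch: pull all retained flow records for one subscriber IP
    # from a flow archive, for a compliance/forensics request.
    # Assumes a SQLite table named "flows" with the columns shown; a real
    # retention store and its schema will differ by vendor.
    import sqlite3

    def flows_for_address(db_path, address):
        conn = sqlite3.connect(db_path)
        try:
            cur = conn.execute(
                "SELECT ts, src_ip, dst_ip, src_port, dst_port, protocol, bytes "
                "FROM flows WHERE src_ip = ? OR dst_ip = ? ORDER BY ts",
                (address, address),
            )
            return cur.fetchall()
        finally:
            conn.close()

    # Example: produce every retained record touching one IPv4 address.
    # records = flows_for_address("retention.db", "203.0.113.7")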

Although different laws govern different countries, the main requirement of mandatory data retention laws for ISPs is to maintain sufficiently granular information to assist governments in finding bad actors involved in terrorism, corporate espionage, ransomware and child exploitation. In some countries this means telcos are required to keep records of the IP addresses users connect to for up to 10 weeks; in others, just the totals of subscriber usage for each IP used, for up to 2 years.

Although the information remains local to each country and is governed by the relevant privacy laws, the benefit to law enforcement is that it will eventually provide the visibility to track relayed data, such as communications carried by Tor browsers, onion routers and Freenet, beyond their relay and exit nodes.

There is no doubt in my mind that, with heightened states of security and increasing online crime, there is a global need for governments to intervene with online surveillance to protect children from exploitation, reduce terrorism and build defensible infrastructures. At the same time, data retention systems need the inbuilt smarts to strike a balance between compliance and privacy, rather than acting as a blanket catch-all. For the Internet communications component, a solution based on NetFlow already exists that helps ISPs comply quickly and at low cost, while allowing data retention rules to be implemented in a way that limits intrusion on an individual’s privacy.

NetFlow solutions are cheap to deploy and, unlike packet analyzers, do not need to be deployed at every interface. They can use the existing router, switch or firewall investment to provide continuous network monitoring across the enterprise, giving the service provider or organization powerful tools for data retention compliance.

NetFlow technology, if sufficiently scalable, granular and flexible, can deliver the visibility, accountability and measurability required for data retention because it can include features that:

  • Supply a real-time look at network and host-based activities down to the individual user and device;
  • Increase user accountability for introducing security risks that impact the entire network;
  • Track, measure and prioritize network risks to reduce Mean Time to Know (MTTK) and Mean Time to Repair or Resolve (MTTR);
  • Deliver the data IT staff needs to engage in in-depth forensic analysis related to security events and official requests;
  • Seamlessly extend network and security monitoring to virtual environments;
  • Assist IT departments in maintaining network up-time and performance, including mission critical applications and software necessary to business process integrity;
  • Assess and enhance the efficacy of traditional security controls already in place, including firewalls and intrusion detection systems;
  • Capture and archive flows for complete data retention compliance.

Compared to other analysis solutions, NetFlow can fill in the gaps where other technologies cannot deliver. A well-architected NetFlow solution can provide a comprehensive landscape of tools to help business and service providers to achieve and maintain data retention compliance for a wide range of government and industry regulations.


Scalable NetFlow – 3 Key Questions to Ask Your NetFlow Vendor

Why is flows-per-second a flawed way to measure a NetFlow collector’s capability?

Flows-per-second is often considered the primary yardstick for measuring a NetFlow analyzer’s flow capture (aka collection) rate.

This seems simple on its face. The more flows-per-second that a flow collector can consume, the more visibility it provides, right? Well, yes and no.

The Basics

NetFlow was originally conceived as a means of giving network professionals the data to make sense of the traffic on their network without having to resort to expensive per-segment packet sniffing tools.

A flow record contains the basic information pertaining to a transfer of data through a router, switch, firewall, packet tap or other network gateway. A typical flow record will contain at minimum: Source IP, Destination IP, Source Port, Destination Port, Protocol, ToS, Ingress Interface and Egress Interface. Flow records are exported to a flow collector, where they are ingested and the information relevant to the engineer’s purposes is displayed.
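
To make that structure concrete, here is an illustrative Python model of those minimum fields. The field names are our own shorthand; on the wire, NetFlow v5 uses fixed binary records, while v9 and IPFIX describe records with templates.

    # Illustrative only: the minimal key fields of a flow record, modeled
    # as a Python dataclass. Field names are hypothetical shorthand.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FlowRecord:
        src_ip: str      # Source IP
        dst_ip: str      # Destination IP
        src_port: int    # Source port
        dst_port: int    # Destination port
        protocol: int    # IP protocol number (6 = TCP, 17 = UDP)
        tos: int         # Type of Service byte
        input_if: int    # Ingress interface (SNMP ifIndex)
        output_if: int   # Egress interface (SNMP ifIndex)
        bytes: int = 0   # Octet count (typical accompanying counter)
        packets: int = 0 # Packet count

    rec = FlowRecord("192.0.2.10", "198.51.100.5", 51514, 443, 6, 0, 1, 2, 8800, 12)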

Measurement

Measurement has always been how the IT industry expresses power and competency. However, the formula used to reflect power and ability changes when a technology design undergoes a paradigm shift.

For example, when expressing how fast a computer is, we used to measure CPU clock speed, believing that the higher the clock speed, the more powerful the computer. When multi-core chips were introduced, clock speeds actually dropped, yet CPUs became more powerful. The primary clock-speed measurement became secondary to the ability to multi-thread.

The flows-per-second yardstick is misleading because it does not accurately reflect the power and capability of a flow collector to capture and process flow data, and it has become prone to marketing exaggeration.

Flow Capture Rate

Flow capture rate is difficult to measure and difficult to use to quantify a product’s scalability. Various factors can dramatically impact the ability to collect flows and to retain sufficient flows to perform higher-end diagnostics.

It’s important to look not just at flows-per-second but at the granularity retained per minute (the flow retention rate); the speed and flexibility of alerting, reporting, forensic depth and diagnostics; scalability under high flow variance, sudden bursts, and growing numbers of devices and interfaces; the speed of reporting over time; the ability to retain both short-term and historical collections; and the confluence of these factors as it pertains to the scalability of the software as a whole.

Scalable NetFlow and flow retention rates are particularly critical to determine, as appropriate granularity is needed to deliver the visibility required for anomaly detection, network forensics, root cause analysis, billing substantiation, peering analysis and data retention compliance.

The higher the flows-per-second and the flow variance, the more challenging it becomes to achieve a high flow retention rate when archiving flow records in a data warehouse.

A vendor’s capability statement might reflect a high flows-per-second consumption rate, but many flow software tools have retention rate limitations by design.

This can mean that, irrespective of achieving a high flow collection rate, the NetFlow analyzer might only be capable of physically archiving 500 flows per minute. Furthermore, these flows are usually the result of sorting the flow data by top bytes to identify the Top 10 bandwidth abusers. NetFlow products of this kind are easy to identify because they tend to offer benefits orientated primarily toward identifying bandwidth abuse or network performance monitoring.

Identifying bandwidth abusers is of course an important benefit of a NetFlow analyzer. However, it is of marginal benefit on its own today, when a large amount of the abuse and risk is caused by many small flows.

These small flows usually fall beneath the radar of many NetFlow analysis products. Abuses like DDoS, P2P, botnets and hacker or insider data exfiltration continue to occur, and can at minimum impact networking equipment and the user experience. The inability to quantify and understand small flows creates great risk and leaves organizations exposed; the sketch below shows how a top-bytes retention policy discards them.
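
The following sketch illustrates the top-bytes retention logic described above and why it hides small-flow abuse. The 500-flow archive limit and the traffic mix are hypothetical numbers.

    # Sketch of "top bytes" retention, to show why it hides abuse made of
    # many small flows. The archive limit and traffic mix are illustrative.
    import heapq

    def retain_top_by_bytes(flows, limit=500):
        # Keep only the <limit> largest flows per archive interval.
        return heapq.nlargest(limit, flows, key=lambda f: f["bytes"])

    # 500 large transfers plus 50,000 tiny DDoS/botnet flows in one minute:
    large = [{"src": f"10.0.0.{i % 250}", "bytes": 1_000_000} for i in range(500)]
    tiny = [{"src": f"198.51.100.{i % 250}", "bytes": 60} for i in range(50_000)]

    retained = retain_top_by_bytes(large + tiny, limit=500)
    print(sum(f["bytes"] < 1000 for f in retained))  # 0 -> every tiny flow is lost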

Scalability

This inability to scale in short-term or historical analysis severely impacts a flow monitoring product’s ability to collect and retain the critical information required in today’s world, where copious data has created severe network blind spots.

To qualify whether a tool is really suitable for the purpose, you need to know more about the flows-per-second collection formula being quoted by the vendor, and some deeper investigation should be carried out to test the claims.

With this in mind, here are 3 key questions to ask your NetFlow vendor to understand what their collection scalability claims really mean:

  1. How many flows can be collected per second?

  • Qualify whether the flows-per-second rate quoted is a burst rate or a sustained rate.
  • Ask how the collection and retention rates might be affected if the flows have high flow variance (e.g. a DDoS attack).
  • Ask how collection, archiving and reporting are impacted when flow variance increases as many devices, interfaces and distinct IPv4/IPv6 conversations are added, and test what degradation in speed you can expect after the tool has been recording for some time.
  • Ask how the collection and retention rates might change when additional fields or measurements are added to the flow template (e.g. MPLS, MAC address, URL, latency).

  2. How many flow records can be retained per minute?

  • Ask how the actual number of records inserted into the data warehouse per minute can be verified for short-term and historical collection.
  • Ask what happens to the flows that were not retained.
  • Ask what the flow retention logic is (e.g. top bytes, first N).

  3. What information granularity is retained, both short-term and historically?

  • Does the data’s time granularity degrade as the data ages (e.g. 1 day of data retained per minute, 2 days retained per hour, 5 days retained per quarter)? The sketch after this list shows this kind of rollup.
  • Can you control the granularity and, if so, for how long?
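
On the granularity question, the sketch below shows the kind of time-granularity rollup being asked about: per-minute records aggregated into per-hour rows as they age past a retention boundary. The record shape and thresholds are assumptions for illustration.

    # Sketch of granularity aging: per-minute (epoch_ts, src_ip, bytes)
    # records are rolled up into coarser buckets as they age. The record
    # shape and the hourly bucket size are illustrative assumptions.
    from collections import defaultdict

    def roll_up(minute_records, bucket_seconds=3600):
        buckets = defaultdict(int)
        for ts, src_ip, nbytes in minute_records:
            buckets[(ts - ts % bucket_seconds, src_ip)] += nbytes
        return [(ts, ip, b) for (ts, ip), b in sorted(buckets.items())]

    # Two hours of per-minute records collapse into two hourly rows.
    minutes = [(1699999200 + i * 60, "192.0.2.10", 5000) for i in range(120)]
    print(roll_up(minutes))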


Remember – Rate of collection does not translate to information retention.

Do you know what’s really stored in the software’s database? After all, you can only analyze what has been retained (either in memory or on disk), and it is that retention granularity that provides a flow product’s benefits.


NetFlow for Usage-Based Billing and Peering Analysis

Usage-based billing refers to methods of calculating and passing back the costs of running a network to the consumers of data that flows through the network. Both Internet Service Providers (ISPs) and corporations need usage-based billing, albeit with different billing models.

NetFlow is the ideal technology for usage-based billing because it allows for the capture of all transactional information pertaining to usage, and some smart NetFlow technologies already exist to assist in the counting, allocation and substantiation of data usage.

Advances in telecommunication technology have enabled ISPs to offer more convenient, streamlined billing options to customers based on bandwidth usage.

One billing model, used most commonly by ISPs in the USA, is known as the 95th percentile. Traffic is measured at a five-minute granularity, typically over the course of a month, and the ISP disregards the highest 5% of samples in order to establish the billed amount. This is an advantage to data consumers who have bursts of traffic, because they are not financially penalized for exceeding a traffic threshold for brief periods of time.
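
A minimal sketch of the calculation, assuming five-minute bandwidth samples in Mbps; billing-grade implementations typically also handle inbound and outbound directions separately and account for missing samples.

    # 95th percentile billing sketch: sort the month's 5-minute samples,
    # discard the top 5%, and bill at the highest remaining sample.
    def ninety_fifth_percentile(samples_mbps):
        ordered = sorted(samples_mbps)
        cutoff_index = int(len(ordered) * 0.95) - 1
        return ordered[max(cutoff_index, 0)]

    # A 30-day month of 5-minute samples = 8640 samples. Brief bursts to
    # 900 Mbps (under 5% of samples) fall away; the customer pays for 40.
    samples = [40.0] * 8240 + [900.0] * 400
    print(ninety_fifth_percentile(samples))  # 40.0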

The disadvantage of the 95th percentile model is that it is not a sustainable business model as data becomes a utility like electricity.

A second approach is a utility-style metered billing model, which involves retaining a tally of all bytes consumed by each customer, with some knowledge of the data path to allow for premium or free traffic plans.

Metered Internet usage is employed in countries like Australia and, most recently, Canada, which have nationally moved away from the 95th percentile model. The approach is also very popular in corporations whose business units share common network infrastructure and are unwilling to accept a “per user” cost, preferring a real consumption-based cost.
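
A metered model is, at its core, a straight tally with path-aware exemptions. In the sketch below, the free-zone prefix and the per-GB rate are purely illustrative assumptions.

    # Metered (utility) billing sketch: tally every byte per customer,
    # excluding "free zone" destinations from the billable total.
    from collections import defaultdict
    from ipaddress import ip_address, ip_network

    FREE_ZONES = [ip_network("203.0.113.0/24")]  # e.g. an on-net video CDN

    def tally(flows, price_per_gb=0.08):
        billable = defaultdict(int)
        for customer, remote_ip, nbytes in flows:
            if any(ip_address(remote_ip) in net for net in FREE_ZONES):
                continue  # free-zone content is not billed to the user
            billable[customer] += nbytes
        return {c: round(b / 1e9 * price_per_gb, 2) for c, b in billable.items()}

    flows = [("cust-1", "198.51.100.9", 40_000_000_000),
             ("cust-1", "203.0.113.20", 900_000_000_000)]  # free-zone video
    print(tally(flows))  # {'cust-1': 3.2} -> only off-net bytes are billed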

Benefits of usage-based billing are:

  • Improved transparency about the cost of services;
  • Cost feedback to the originator;
  • Raised cost sensitivity;
  • Good basis for active cost management;
  • A basis for internal and external benchmarking;
  • Clear substantiation to increase bandwidth costs;
  • Shared infrastructure costs can also be based on consumption;
  • Network performance improvements.

For corporations, usage-based billing enables the IT department to become a shared service, viewed as a profit center rather than a cost center. It can come to be seen as a benefit and a catalyst for business growth rather than a necessary but expensive line item in the budget.

For ISPs in the USA, there is no doubt that a utility-based cost-per-byte model will remain contentious as video and TV over the Internet increase usage. In other regions, new business models that package video into “free zone” services have become popular, shifting the cost of premium content provision onto the content provider; a similar shift could make utility billing viable in the USA.

NetFlow tools can include methods for building billing reports and offer a variety of usage-based billing model calculations.

Some NetFlow tools even include an API that allows the chart of accounts to be retained and driven from traditional accounting systems, with the NetFlow system focusing on the tallying. Grouping algorithms should be flexible within the solution, allowing grouping by variables such as interfaces, applications, Quality of Service (QoS), MAC addresses, MPLS, and IP groups. For ISPs and large corporations, Autonomous System Numbers (ASNs) also allow for analysis of data paths, enabling sensible negotiations with peering partners and content partners.

Look out for more discussion on peering in an upcoming blog…


What is NetFlow & How Can Organizations Leverage It?

NetFlow is a feature originally introduced on Cisco devices (but now generally available on many vendor devices) which provides the ability for an organization to monitor and collect IP network traffic entering or exiting an interface.
Through analysis of the data provided by NetFlow, a network administrator is able to detect things such as the source and destination of traffic, class of service, and the causes of congestion on the network.

NetFlow is designed to be utilized either from the software built into a router/switch or from external probes.

The purpose of NetFlow is to provide an organization with information about network traffic flow, both into and out of the device, by analyzing the first packet of a flow and using that packet as the standard for the rest of the flow. It has two variants which are designed to allow for more flexibility when it comes to implementing NetFlow on a network.

NetFlow was originally developed by Cisco in the 1990s as a packet switching technology for Cisco routers, implemented in IOS 11.x.

The concept was that instead of having to inspect each packet in a “flow”, the device need only inspect the first packet and create a “NetFlow switching record”, alternatively named a “route cache record”.

Once that record was created, further packets in the same flow did not need to be inspected; they could simply be forwarded based on the determination made from the first packet. While this idea was forward-thinking, it had many drawbacks that made it unsuitable for larger Internet backbone routers.
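
The idea is easy to express as a sketch: a cache keyed on the flow 5-tuple, populated by the first packet and merely updated by subsequent ones. This is a conceptual illustration in Python, not Cisco’s implementation.

    # Conceptual sketch of a "route cache": the first packet of a flow
    # creates an entry keyed on the 5-tuple; later packets in the same
    # flow skip re-inspection and just update counters.
    flow_cache = {}

    def on_packet(src_ip, dst_ip, src_port, dst_port, protocol, size):
        key = (src_ip, dst_ip, src_port, dst_port, protocol)
        entry = flow_cache.get(key)
        if entry is None:
            # First packet: the full inspection/forwarding decision happens here.
            flow_cache[key] = {"packets": 1, "bytes": size}
        else:
            # Subsequent packets: forward on the cached decision, bump counters.
            entry["packets"] += 1
            entry["bytes"] += size

    on_packet("192.0.2.10", "198.51.100.5", 51514, 443, 6, 1500)
    on_packet("192.0.2.10", "198.51.100.5", 51514, 443, 6, 1500)
    print(flow_cache)  # one flow entry with packets=2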

In the end, Cisco abandoned that form of traffic routing in favor of “Cisco Express Forwarding”.

However, Cisco (and others) realized that by collecting and storing / forwarding that “flow data” they could offer insight into the traffic that was traversing the device interfaces.

At the time, the only way to see any information about which IP addresses or application ports were “inside” the traffic was to deploy packet sniffing systems, which would sit inline or connect to SPAN/mirror ports and “sniff” the traffic. This can be an expensive and sometimes difficult solution to deploy.

Instead, by exporting the NetFlow data to an application which could store / process / display the information, network managers could now see many of the key meta-data aspects of traffic without having to deploy the “sniffer” probes.

Routers and switches which are NetFlow-capable are able to collect the IP traffic statistics at all interfaces on which NetFlow is enabled. This information is then exported as NetFlow records to a NetFlow collector, which is typically a server doing the traffic analysis.
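
To give a feel for the collector side, here is a minimal sketch of a NetFlow v5 listener in Python. The byte offsets follow the published v5 packet layout; a production collector would also handle v9/IPFIX templates, export sequence gaps, and a real datastore.

    # Minimal NetFlow v5 collector sketch: listen on UDP, parse the 24-byte
    # header and each 48-byte flow record. Runs until interrupted.
    import socket
    import struct
    from ipaddress import ip_address

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 2055))  # a conventional flow-export port

    while True:
        data, exporter = sock.recvfrom(8192)
        version, count = struct.unpack("!HH", data[:4])
        if version != 5:
            continue  # v9/IPFIX would need template handling
        for i in range(count):
            rec = data[24 + i * 48 : 24 + (i + 1) * 48]
            src, dst = ip_address(rec[0:4]), ip_address(rec[4:8])
            octets, = struct.unpack("!I", rec[20:24])
            sport, dport, _, _, proto = struct.unpack("!HHBBB", rec[32:39])
            print(exporter[0], src, sport, "->", dst, dport, proto, octets)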

There are two main NetFlow variants: Security Event Logging and Standalone Probe-Based Monitoring.

Security Event Logging was introduced on the Cisco ASA 5580 products and utilizes NetFlow v9 fields and templates. It delivers security telemetry in high performance environments and offers the same level of detail in logged events as syslog.

Standalone Probe-Based Monitoring is an alternative to flow collection from routers and switches and uses NetFlow probes, allowing NetFlow to overcome some of the limitations of router-based monitoring. Dedicated probes allow for easier implementation of NetFlow monitoring, but probes must be placed at each link to be observed and probes will not report separate input and output as a router will.

An organization or company may implement NetFlow by utilizing a NetFlow-capable device. However, they may wish to use one of the variants for a more flexible experience.

By using NetFlow, an organization will have insight into the traffic on its network, which may be used to find sources of congestion and improve network traffic flow so that the network is utilized to its full capability.


Seven Reasons To Analyze Network Traffic With NetFlow

NetFlow allows you to keep an eye on the traffic and transactions that occur on your network. It can detect unusual traffic, a request to a malicious destination or the download of a large file. NetFlow analysis helps you see what users are doing, gives you an idea of how your bandwidth is used, and can help you improve your network, besides protecting you from a number of attacks.

There are many reasons to analyze network traffic with NetFlow, including making your system more efficient as well as keeping it safe. Here are some of the reasons behind many organizations’ adoption of NetFlow analysis:

  • Analyze your entire network. NetFlow allows you to keep track of all the connections occurring on your network, including those hidden by a rootkit. You can review all the ports and external hosts an IP address connected to within a specific period of time, and collect data to get an overview of how your network is used.
  • Track bandwidth use. You can use NetFlow to track bandwidth use and see reports on average usage over time. This can help you determine when spikes are likely to occur so that you can plan accordingly. Tracking bandwidth allows you to better understand traffic patterns, and that information can be used to identify unusual patterns, such as surges caused by a user downloading a large file or by a DDoS attack.
  • Keep your network safe from DDoS attacks. These attacks target your network by overloading your servers with more traffic than they can handle. NetFlow can detect this type of unusual surge in traffic, as well as identify the botnet controlling the attack and the infected computers following the botnet’s orders and sending traffic to your network. You can block the botnet and the infected computers to stop the attack in progress and prevent future ones (see the sketch after this list).
  • Protect your network from malware. Even the safest network can be exposed to malware via users connecting from home or bringing their mobile devices to work. A bot present on a home computer or smartphone can access your network, but NetFlow will detect this type of abnormal traffic, and auto-mitigation tools can block it automatically.
  • Optimize your cloud. By tracking bandwidth use, NetFlow can show you which applications slow down your cloud and give you an overview of how your cloud is used. You can also track performance to optimize your cloud and make sure your cloud service provider is delivering the solution they advertised.
  • Monitor users. Everyone brings their own smartphone to work nowadays and might use it for purposes other than work. Company data may also be accessible to insiders who have legitimate access but an inappropriate agenda, downloading and sharing sensitive data with outside sources. You can keep track of how much bandwidth is used for data leakage or personal activities, such as using Facebook during work hours.
  • Data retention compliance. NetFlow can fill in the gaps where other technologies cannot deliver. A well-architected NetFlow solution can help businesses and service providers achieve and maintain data retention compliance for a wide range of government and industry regulations.
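
As a concrete illustration of the DDoS point above, the sketch below flags minutes whose flow counts spike far above a simple baseline. The 30-minute baseline window and 3-sigma threshold are arbitrary choices for illustration; real products use far more sophisticated behavioral models.

    # Surge-detection sketch: flag minutes whose flow count toward one
    # destination far exceeds a baseline built from quiet history.
    from statistics import mean, stdev

    def surge_alerts(per_minute_counts, sigma=3.0):
        history = per_minute_counts[:30]  # first 30 minutes = baseline
        mu, sd = mean(history), stdev(history)
        threshold = mu + sigma * max(sd, 1.0)
        return [(minute, count)
                for minute, count in enumerate(per_minute_counts[30:], start=30)
                if count > threshold]

    normal = [100, 110, 95, 105, 98, 102] * 5  # 30 quiet minutes
    attack = normal + [110, 5000, 9000]        # sudden surge of flows
    print(surge_alerts(attack))                # [(31, 5000), (32, 9000)]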

NetFlow is an easy way to monitor your network and provides you with several advantages, including making your network safer and collecting the data you need to optimize it. Having access to a comprehensive overview of your network from a single pane of glass makes monitoring your network easy and enables you to check what is going on with your network with a simple glance.

CySight takes the extra step to make life far easier for network and security professionals, with smart alerts, actionable network intelligence, scalability, and automated diagnostics and mitigation in a complete technology package.

CySight can provide you with the right tools to analyze traffic, monitor your network, protect it and optimize it. Contact us to learn more about NetFlow and how you can get the most out of this amazing tool.


3 Key Differences Between NetFlow and Packet Capture Performance Monitoring

The increasing density, complexity and expanse of modern networking environments have fueled the ongoing debate around which network analysis and monitoring tools serve the needs of the modern engineer best – placing Packet Capture and NetFlow Analysis at center-stage of the conversation. Granted, both can be extremely valuable tools in ongoing efforts to maintain and optimize complex environments, but as an engineer, I tend to focus on solutions that give me the insights I need without too much cost on my resources, while complementing my team’s ability to maintain and optimize the environments we support.

So with this in mind, let’s take a look at how NetFlow, in the context of the highly-dense networks we find today, delivers three key requirements network teams rely on for reliable end-to-end performance monitoring of their environments.

A NetFlow deployment won’t drain your resources

Packet Capture, however rich in network metrics, requires sniffing devices and agents throughout the network, which invariably require some level of maintenance during their lifespan. In addition, the amount of space required to store and analyze packet data makes it an inefficient and inelegant method of monitoring or forensic analysis. Combine this with the levels of complexity networks can reach today, and the overall cost and maintenance associated with packet sniffers can quickly become unfeasible. NetFlow, by contrast, enjoys wide vendor support across virtually the entire networking landscape, making almost every switch, router or firewall a NetFlow-ready device. Devices’ built-in readiness to capture and export data-rich metrics makes NetFlow easy for engineers to deploy and utilize. And thanks to its popularity, NetFlow analyzers with varying feature sets are available for network operations center (NOC) teams to take full advantage of data-rich flows.

Striking the balance between detail and context

Considering how network-dependent and widespread applications have become in recent years, NetFlow’s ability to provide WAN-wide metrics in near real-time makes it a suitable troubleshooting companion for engineers. And with version 9 of NetFlow extending the wealth of information it collects via a template-based collection scheme, it strikes the balance between detail and high-level insight without placing too much demand on networking hardware, which is something that can’t be said for Packet Capture. Packet Capture tools do what they do best, namely Deep Packet Inspection (DPI), which allows for the identification of aspects of traffic that were previously hidden from NetFlow analyzers. But NetFlow’s constant evolution alongside the networking landscape is seeing it used as a complement to solutions such as Cisco’s NBAR and other DPI solutions, whose vendors have recognized that flexible NetFlow tools can reveal details at the packet level.

NetFlow places your environment in greater context

Context is a chief area where NetFlow beats out Packet Capture, since it allows engineers to quickly locate root causes relating to performance by providing a more situational view of the environment: its data flows, bottleneck-prone segments, application behavior, device sessions and so on. We could argue that packet sniffing is able to provide much of this information too, but it doesn’t give engineers the broader context around the information it presents, thus hamstringing IT teams in detecting performance anomalies that could be ascribed to a number of factors, such as untimely system-wide application or operating system updates, or a cross-link backup application pulling loads of data across the WAN during operational hours.

So does NetFlow make Packet Capture obsolete?

The short answer is no. In fact, Packet Capture, when properly coupled with NetFlow, can make a very elegant solution: for example, using NetFlow to identify an attack profile or illicit traffic and then analyzing the corresponding raw packets. NetFlow, however, strikes that perfect balance between detail and context, giving NOCs intelligent insights that reveal the broader factors influencing your network’s ability to perform. Gartner’s assertion that a balance of 80% NetFlow monitoring coupled with 20% Packet Capture is the perfect combination for performance monitoring attests to NetFlow’s growing prominence as the monitoring tool of choice. And as NetFlow and its various iterations, such as sFlow, IPFIX and others, continue to expand the breadth of context they provide network engineers, that margin is set to increase in its favor over time.


Two Ways Networks Are Transformed By NetFlow

According to an article on techtarget.com, “Your routers and switches can yield a mother lode of information about your network–if you know where to dig.” The article goes on to say that excavating and searching through the endless traffic data and logs generated by your network is a lot like mining for gold: punching random holes to look for a few nuggets of information isn’t very efficient. Your search will be much more fruitful if you know where to look and what it will look like. Fortunately, the data generated by the NetFlow traffic reporting protocol yields specific information that you can easily sort, view and analyze into what you want to use or need.

In contemporary networks there is a need to collect and retain a good set of traffic records for several different purposes. These include the ability to monitor traffic for network planning, security and analysis, as well as to track traffic usage for billing purposes. Every business experiences network problems. The goal is to transform these “badly behaving” networks by investigating the data generated by the routers, switches and other hardware that make up the system.

  • Trace and repair network misconfigurations

Problems with networks can run the gamut from mismatched applications and hardware to wireless access points opened to accommodate BYOD users and other business uses. While there is always talk about software flaws and news about the latest Internet threat, those things often distract IT pros from the real, everyday threat of unstable networks that have been configured to accommodate legacy hardware and a multitude of software applications.

The increasing complexity of the Internet itself, with the interconnection of many different devices and device types, adds to the challenge of operating a computer network. Even though developing protocols that respond to unpredicted failures and misconfigurations is a workable solution, out-of-date configurations can still cause frequent problems and denial of service (DoS). With many modern network devices monitoring functions and gathering data, retrieving and utilizing NetFlow information makes tracing and repairing misconfigurations possible, easier and more efficient.

  • Detect security breaches

There are many uses for NetFlow but one of the most important is the benefit of network security. This quote from an article by Wagner and Bernhard, describing worm and anomaly detection in fast IP networks, bears out the security problems facing governments, businesses, and internet users today.

“Large-scale network events, such as outbreaks of a fast Internet worm are difficult to detect in real-time from observed traffic, even more so if the observed network carries a large amount of traffic. Even during worm outbreaks, actual attack traffic is only a small fraction of the overall traffic. Its precise characteristics are usually unknown beforehand, making direct matching hard. It is desirable to derive parameters that describe large amounts of traffic data in such a way that details are hidden, but the type of changes associated with worm outbreaks and other network events are visible.”

NetFlow provides a 24/7 account of all network activity: an “unblinking” eye observing anything and everything that happens within the network boundaries. All the data needed to identify and enact a clean-up is recorded in the flows, which is invaluable to a security pro trying to reduce the impact of a breach. NetFlow provides a visible, “what’s happening right now” view that other systems cannot. Most security systems alert after something has been detected, while NetFlow is constantly gathering information even when things seem fine. In addition, NetFlow-based analysis relies on traffic behavior and algorithms, which provides rapid detection of breaches that other technologies often miss.


How Traffic Accounting Keeps You One Step Ahead Of The Competition

IT has steadily evolved from a service and operational delivery mechanism to a strategic business investment. Suffice it to say that the business world and technology have become so intertwined that it’s unsurprising many leading companies within their respective industries attribute their success largely to their adoptive stance toward innovation.

Network Managers know that much of their company’s ability to outmaneuver the competition depends to a large extent on IT Ops’ ability to deliver world-class services. This brings traffic accounting into the conversation, since a realistic and measured view of your current and future traffic flows is central to building an environment in which all the facets involved in its growth, stability and performance are continually addressed.

In this blog, we’ll take a look at how traffic accounting puts your network operations center (NOC) team on the front foot in its objective to optimize the flow of your business’ most precious cargo: its data.

All roads lead to performance baselining 

Performance baselines lay the foundation for network-wide traffic accounting against predetermined environment thresholds. They also aid IT Ops teams in planning for network growth and expansion undertakings. Baseline information typically contains statistics on network utilization, traffic components, conversation and address statistics, packet information and key device metrics.

It serves as your network’s barometer, informing you when anomalies such as excessive bandwidth consumption and other causes of bottlenecks occur. Root causes of performance issues can easily creep into an environment unnoticed, such as a recent update to a business-critical application that causes significant spikes in network utilization. Armed with a comprehensive set of baseline statistics and data that allow Network Performance and Security Specialists to measure, compare and analyze network metrics, root causes such as these can be identified with elevated efficiency.
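
As a simple illustration of baselining, the sketch below builds an hour-of-day utilization profile from historical samples and reports how far a live reading deviates from it. The data shapes and numbers are assumptions; real baselines fold in many more metrics.

    # Baseline sketch: average historical utilization per hour of day,
    # then report a live reading's deviation from the expected value.
    from collections import defaultdict
    from statistics import mean

    def hourly_baseline(samples):
        # samples: list of (hour_of_day, mbps) from historical data.
        by_hour = defaultdict(list)
        for hour, mbps in samples:
            by_hour[hour].append(mbps)
        return {hour: mean(vals) for hour, vals in by_hour.items()}

    def deviation_pct(baseline, hour, live_mbps):
        expected = baseline[hour]
        return (live_mbps - expected) / expected * 100.0

    history = [(9, 400), (9, 420), (9, 380), (14, 600), (14, 640)]
    base = hourly_baseline(history)
    print(f"{deviation_pct(base, 14, 900):.0f}% above the 2pm baseline")  # ~45%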

In broader applications, baselining gives Network Engineers a high-level view of their environments, allowing them to configure Quality of Service (QoS) parameters, plan for upgrades and expansions, detect and monitor trends, perform peering analysis, and carry out a bevy of other functions.

Traffic accounting brings your future network into focus

With new-generation technologies such as cloud, resource virtualization, “as-a-service” platforms and mobility revolutionizing the networks of yesteryear, capacity planning has taken on a new level of significance. Network monitoring systems (NMS) need to meet the demands of the new, complex, hybrid systems that are the order of the day. Thankfully, technologies such as NetFlow have evolved steadily over the years to address the monitoring demands of modern networks. NetFlow accounting is a reliable way to peer through the wire and gain deeper insight into the traffic that traverses your environment. Many Network Engineers and Security Specialists will agree that their understanding of their environments hinges on the level of insight they glean from their monitoring solutions.

This makes NetFlow an ideal traffic accounting medium, since it easily collects and exports data from virtually any connected device for analysis by a NetFlow analyzer such as CySight. The technology’s standing in the industry has made it the “go-to” solution for curating detailed, insightful and actionable metrics that move IT organizations from a reactive to a proactive stance toward network optimization.

Traffic accounting’s influence on business productivity and performance

As organizations become increasingly technology-centric in their business strategies, their reliance on networks that consistently perform at their peak will increase accordingly. This places new pressure on Network Performance and Security Teams to conduct iterative performance and capacity testing that contextualizes their environment’s ability to perform when it matters most. NetFlow’s ability to provide contextual insights based on live and historical data means Network Operations Centers (NOCs) are able to react to immediate performance hindrances and also predict, with a fair level of accuracy, what the challenges of tomorrow may hold. That is worth gold in the context of an ever-changing and expanding networking landscape.
