Category Archive for ‘BYOD’

How NetFlow Solves for Mandatory Data Retention Compliance

Compliance in IT is not new: laws regulating how organizations should manage their customer data, such as HIPAA, PCI and SCADA regulations, already exist, and network transaction logging has begun to be required of business. Insurance companies are gearing up to qualify businesses by the information they retain to protect their services and customer information. Government and industry regulations, and their enforcement, are becoming increasingly stringent.

Most recently many countries have begun to implement Mandatory Data Retention laws for telecom service providers.

Governments require a mandatory data retention scheme because more and more crime is moving from the physical world online, while ISPs are keeping less data and retaining it for shorter periods. This negatively impacts the investigative capabilities of law enforcement and security agencies, which need timely information to help save lives by spotting lone-wolf terrorists early, or to protect vulnerable members of society from abuse by sexual predators, ransomware and other online crimes.

Although there is no doubt as to the value of mandatory data retention schemes, they are not without justifiable privacy and human rights concerns, and they are expensive to implement.

It takes a lot of cash, time and skills that many ISPs and companies simply cannot afford, yet Internet and managed service providers and large organizations must take proper precautions to remain in compliance. Heavy fines, license and certification issues and other penalties can result from non-compliance with mandatory data retention requirements.

According to the Australian Attorney-General’s Department, Australian telecommunications companies must keep a limited set of metadata for two years. Metadata is information about a communication (the who, when, where and how)—not the content or substance of a communication (the what).

A commentator from the Sydney Morning Herald observed that “…Security, intelligence and law enforcement access to metadata which overrides personal privacy is now in contention worldwide…” and speculated that, with the introduction of Australian metadata laws, “…this country’s entire communications industry will be turned into a surveillance and monitoring arm of at least 21 agencies of executive government. …”

In Australia, many smaller ISPs fear that failing to comply will put them out of business. Internet Australia’s Laurie Patton said, “It’s such a complicated and fundamentally flawed piece of legislation that there are hundreds of ISPs out there that are still struggling to understand what they’ve got to do”.

As for the anticipated costs, a survey sent to ISPs by telecommunications industry lobby group Communications Alliance found that “There is a huge variance in estimates for the cost to business of implementing data retention – 58 per cent of ISPs say it will cost between $10,000 and $250,000; 24 per cent estimate it will cost over $250,000; 12 per cent think it will cost over $1,000,000; some estimates go as high as $10 million.”

An important cost to consider in compliance is the ease of reporting when government or corporate compliance teams request information for a specific IPv4 or IPv6 address. If the data is stored in a data warehouse that is difficult to filter, the service provider may incur penalties or be seen as non-compliant. Flexible filtering and automated reporting are therefore critical to produce the required forensics in a timely and cost-effective manner.
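As a toy illustration of the reporting problem, a retained flow store must be able to answer “show every flow that touches this address” quickly. The record fields below are hypothetical, but the same filter works for both IPv4 and IPv6:

```python
from ipaddress import ip_address

# Hypothetical retained flow records; field names are illustrative only.
flows = [
    {"src": "203.0.113.7", "dst": "198.51.100.12", "bytes": 1200},
    {"src": "2001:db8::1", "dst": "203.0.113.7", "bytes": 560},
    {"src": "192.0.2.44", "dst": "198.51.100.12", "bytes": 310},
]

def flows_for_address(records, addr):
    """Return every retained flow in which the address appears as source or destination."""
    target = ip_address(addr)   # accepts IPv4 or IPv6 notation
    return [r for r in records
            if ip_address(r["src"]) == target or ip_address(r["dst"]) == target]

matches = flows_for_address(flows, "203.0.113.7")   # finds the first two records
```

In practice this filter would run as an indexed query against years of retained data; the point is that the store must make the lookup cheap enough to answer official requests on time.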

Although different laws govern different countries, the main requirement of mandatory data retention laws for ISPs is to maintain sufficiently granular information to assist governments in finding bad actors engaged in terrorism, corporate espionage, ransomware and child exploitation. In some countries this means telcos are required to keep data on the IP addresses users connect to for up to 10 weeks; in others, just the totals of subscriber usage for each IP used, for up to 2 years.

Although information remains local to each country and governed by relevant privacy laws, the benefit to law enforcement is that, in the future, it will eventually provide the visibility to track relayed data, such as communications carried by Tor browsers, onion routers and Freenet, beyond their relay and exit nodes.

There is no doubt in my mind that, with heightened states of security and increasing online crime, there is a global need for governments to intervene with online surveillance to protect children from exploitation, reduce terrorism and build defensible infrastructures. At the same time, data retention systems need the inbuilt smarts to enable a balance between compliance and privacy, rather than a blanket catch-all. A solution for the Internet communications component, based on NetFlow, is already available; it helps ISPs comply quickly and at low cost, while allowing data retention rules to be implemented in a way that limits intrusion on an individual’s privacy.

NetFlow solutions are cheap to deploy. Unlike packet analyzers, they do not need to be deployed at every interface, and they can use the existing router, switch or firewall investment to provide continuous network monitoring across the enterprise, giving the service provider or organization powerful tools for data retention compliance.

NetFlow technology if sufficiently scalable, granular and flexible can deliver on the visibility, accountability and measurability required for data retention because it can include features that:

  • Supply a real-time look at network and host-based activities down to the individual user and device;
  • Increase user accountability for introducing security risks that impact the entire network;
  • Track, measure and prioritize network risks to reduce Mean Time to Know (MTTK) and Mean Time to Repair or Resolve (MTTR);
  • Deliver the data IT staff needs to engage in in-depth forensic analysis related to security events and official requests;
  • Seamlessly extend network and security monitoring to virtual environments;
  • Assist IT departments in maintaining network up-time and performance, including mission critical applications and software necessary to business process integrity;
  • Assess and enhance the efficacy of traditional security controls already in place, including firewalls and intrusion detection systems;
  • Capture and archive flows for complete data retention compliance.

Compared to other analysis solutions, NetFlow can fill in the gaps where other technologies cannot deliver. A well-architected NetFlow solution can provide a comprehensive landscape of tools to help business and service providers to achieve and maintain data retention compliance for a wide range of government and industry regulations.

Performance Monitoring & Security Forensics: The 1-2 Punch for Network and IT Infrastructure Visibility

Big Data – A Global Approach To Local Threat Detection

From helping prevent loss of life in the event of a natural disaster, to aiding marketing teams in designing more targeted strategies to reach new customers, big data seems to be the chief talking point amongst a broad and diverse circle of professionals.

For Security Engineers, big data analytics is proving to be an effective defense against evolving network intrusions thanks to the delivery of near real-time insights based on high volumes of diverse network data. This is largely thanks to technological advances that have resulted in the capacity to transmit, capture, store and analyze swathes of data through high-powered and relatively low-cost computing systems.

In this blog, we’ll take a look at how big data is bringing deeper visibility to security teams as environments increase in complexity and our reliance on pervading network systems intensifies.

Big data analysis is providing answers to the data deluge dilemma

Large environments generate gigabytes of raw user, application and device metrics by the minute, leaving security teams stranded in a deluge of data. Placing them further on the back foot is the need to sift through this data, which involves considerable resources that at best only provide a retrospective view on security breaches.

Big data offers a solution to the issue of “too much data too fast” through the rapid analysis of swathes of disparate metrics through advanced and evolving analytical platforms. The result is actionable security intelligence, based on comprehensive datasets, presented in an easy-to-consume format that not only provides historic views of network events, but enables security teams to better anticipate threats as they evolve.

In addition, big data’s ability to facilitate more accurate predictions on future events is a strong motivating factor for the adoption of the discipline within the context of information security.

Leveraging big data to build the secure networks of tomorrow

As new technologies arrive on the scene, they introduce businesses to new opportunities – and vulnerabilities. However, the application of Predictive AI Baselining analytics to network security in the context of the evolving network is helping to build the secure, stable and predictable networks of tomorrow. Detecting modern, more advanced threats requires big data capabilities from incumbent intrusion prevention and detection (IDS/IPS) solutions to distinguish normal traffic from potential threats.

By contextualizing diverse sets of data, Security Engineers can more effectively detect stealthily designed threats that traditional monitoring methodologies often fail to pick up. For example, Advanced Persistent Threats (APT) are notorious for their ability to go undetected by masking themselves as day-to-day network traffic. These low visibility attacks can occur over long periods of time and on separate devices, making them difficult to detect since no discernible patterns arise from their activities through the lens of traditional monitoring systems.

Big data Predictive AI Baselining analytics lifts the veil on threats that operate under the radar of traditional signature and log-based security solutions by contextualizing traffic and giving NOCs a deeper understanding of the data that traverses the wire.

Gartner states that, “Big data Predictive AI Baselining analytics enables enterprises to combine and correlate external and internal information to see a bigger picture of threats against their enterprises.”  It also eliminates the siloed approach to security monitoring by converging network traffic and organizing it in a central data repository for analysis; resulting in much needed granularity for effective intrusion detection, prevention and security forensics.

In addition, Predictive AI Baselining analytics eliminates barriers to internal collaborations between Network, Security and Performance Engineers by further contextualizing network data that traditionally acted as separate pieces of a very large puzzle.

So is big data Predictive AI Baselining analytics the future of network monitoring?

In a way, NOC teams have been using big data long before the discipline went mainstream. Large networks have always produced high volumes of data at high speeds – only now, that influx has intensified exponentially.

Thankfully, with the rapid evolution of computing power at relatively low cost, the possibilities of what our data can tell us about our networks are becoming more apparent.

The timing couldn’t have been more appropriate since traditional perimeter-based IDS/IPS no longer meet the demands of modern networks that span vast geographical areas with multiple entry points.

In the age of cloud, mobility, ubiquitous Internet and the ever-expanding enterprise environment, big data capabilities will and should become an intrinsic part of virtually every security apparatus.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

NetFlow for Usage-Based Billing and Peering Analysis

Usage-based billing refers to the methods of calculating and passing back the costs of running a network to the consumers of the data that flows through the network. Both Internet Service Providers (ISPs) and corporations need usage-based billing, albeit with different billing models.

NetFlow is the ideal technology for usage-based billing because it captures all transactional information pertaining to usage, and some smart NetFlow technologies already exist to assist in the counting, allocation and substantiation of data usage.

Advances in telecommunication technology have enabled ISPs to offer more convenient, streamlined billing options to customers based on bandwidth usage.

One billing model, used most commonly by ISPs in the USA, is known as the 95th percentile. The solution measures traffic at a five-minute granularity, typically over the course of a month; the ISP then sorts the samples and disregards the highest 5% to establish the bill amount. This is an advantage to data consumers who have bursts of traffic, because they are not financially penalized for briefly exceeding a traffic threshold.
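A minimal sketch of the calculation, on a hypothetical list of five-minute samples in Mbps (a real month would hold roughly 8,640 of them):

```python
def ninety_fifth_percentile(samples):
    """Sort the five-minute samples, discard the top 5%, and bill on the highest remaining value."""
    ordered = sorted(samples)
    index = int(len(ordered) * 0.95) - 1   # last sample inside the 95th percentile
    return ordered[max(index, 0)]

# 20 illustrative samples: with the top 5% (one sample) discarded,
# the 90 Mbps burst does not count toward the bill.
samples = [12, 15, 11, 14, 90, 13, 12, 16, 11, 10,
           14, 13, 85, 12, 11, 15, 13, 12, 14, 11]
billable = ninety_fifth_percentile(samples)   # 85
```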

The disadvantage of the 95th percentile model is that it is not a sustainable business model as data continues to become a utility, like electricity.

A second approach is a utility-based, metered billing model that involves keeping a tally of all bytes consumed by a customer, with some knowledge of the data path to allow for premium or free traffic plans.

Metered Internet usage is employed in countries like Australia and, most recently, Canada, which has nationally moved away from the 95th percentile model. This approach is also very popular in corporations whose business units share common network infrastructure and are unwilling to accept a “per user” cost, preferring a real consumption-based cost.
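A metered model reduces to a per-customer byte tally. The sketch below uses hypothetical record fields and a made-up premium address range to illustrate path-aware rating:

```python
from collections import defaultdict

PREMIUM_PREFIX = "10.1."   # assumed address range rated differently (e.g. a free zone or premium path)

def tally_usage(flows):
    """Accumulate bytes per customer, split by traffic tier."""
    usage = defaultdict(lambda: {"standard": 0, "premium": 0})
    for f in flows:
        tier = "premium" if f["dst"].startswith(PREMIUM_PREFIX) else "standard"
        usage[f["customer"]][tier] += f["bytes"]
    return usage

flows = [
    {"customer": "acme", "dst": "10.1.0.5", "bytes": 5000},
    {"customer": "acme", "dst": "8.8.8.8", "bytes": 1500},
    {"customer": "initech", "dst": "8.8.4.4", "bytes": 700},
]
usage = tally_usage(flows)
# usage["acme"] == {"standard": 1500, "premium": 5000}
```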

Benefits of usage-based billing are:

  • Improved transparency about the cost of services;
  • Costs feedback to the originator;
  • Raised cost sensitivity;
  • Good basis for active cost management;
  • The basis for Internal and external benchmarking;
  • Clear substantiation to increase bandwidth costs;
  • Shared infrastructure costs can also be based on consumption;
  • Network performance improvements.

For corporations, usage-based billing enables the IT department to become a shared service, viewed as a profit center rather than a cost center: a benefit and a catalyst for business growth rather than a necessary but expensive line item in the budget.

For ISPs in the USA, there is no doubt that a utility-based cost-per-byte model will remain contentious as video and TV over the Internet increase. In other regions, new business models that package video into “free zones” have become popular, shifting the cost of premium content provision onto the content provider and making utility billing viable in the USA.

NetFlow tools can include methods for building billing reports and offer a variety of usage-based billing model calculations.

Some NetFlow tools even include an API that allows the chart of accounts to be retained and driven from traditional accounting systems, using the NetFlow system to focus on the tallying. Grouping algorithms should be flexible within the solution to allow for grouping on all manner of variables such as interfaces, applications, Quality of Service (QoS), MAC addresses, MPLS, and IP groups. For ISPs and large corporations, Autonomous System Numbers (ASNs) also allow for analysis of data paths, enabling sensible negotiations with peering partners and content partners.
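As a simple illustration of ASN grouping for peering analysis (assuming the destination ASN was resolved at collection time; the flows and ASNs below are invented, drawn from the private-use range):

```python
from collections import Counter

# Invented flow summaries with pre-resolved destination ASNs
flows = [
    {"dst_asn": 64500, "bytes": 9_000_000},
    {"dst_asn": 64501, "bytes": 2_500_000},
    {"dst_asn": 64500, "bytes": 4_000_000},
]

bytes_per_asn = Counter()
for f in flows:
    bytes_per_asn[f["dst_asn"]] += f["bytes"]

# Rank candidate peers by volume: the heaviest ASN is the first worth negotiating with
top_peers = bytes_per_asn.most_common()   # [(64500, 13000000), (64501, 2500000)]
```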

Look out for more discussion on peering in an upcoming blog…


How to counter-punch botnets, viruses, ToR & more with Netflow [Pt 1]

You can’t secure what you can’t see and you don’t know what you don’t know.

Many network and security professionals assume that they can simply analyze data captured using their standard security devices, like firewalls and intrusion detection systems. They quickly discover the limitations: these devices are not designed to record and report on every transaction, lacking the granularity, scalability and historic data retention to do so. Network devices like routers, switches, Wi-Fi access points or VMware servers also typically lack any sophisticated anti-virus software.

The mark of a well-constructed traffic analyzer is presenting information in a manner that quickly enables security teams to act: simple views, with deep contextual data supporting the summaries, ensure teams are not bogged down by detail unless required. Even then, the analyzer should offer elegant means to extract forensics, with simple but powerful visuals that enable a quick grasp of the context and impact of a security event.

Using NetFlow Correlation to Detect Intrusions

Host Reputation is one of the best detection methods that can be used against Advanced Persistent Threats. There are many data sources to choose from and some are more comprehensive than others.

Today these blacklists are mostly IPv4 and domain oriented, designed primarily for use by firewalls, network intrusion systems and antivirus software.

They can also be used very successfully in NetFlow systems, as long as the selected flow technology can scale to support the thousands of known compromised end-points, can frequently update the threat data, and can record the full detail of every compromised flow and of subsequent conversations with the compromised systems, in order to discover other related breaches that may have occurred or are occurring.

According to Mike Schiffman at Cisco,

“If a given IP address is known to be that of a spammer or a part of a botnet army it can be flagged in one of the ill repute databases … Since these databases are all keyed on IP address, NetFlow data can be correlated against them and subsequent malicious traffic patterns can be observed, blocked, or flagged for further action. This is NetFlow Correlation.“

The kind of data we can expect to find in the reputation databases is IP addresses known to be acting in some malicious or negative manner, such as being seen by multiple global honeypots. Some have been identified as part of a well-known botnet such as Palevo or Zeus, while other IPs are known to have been distributing malware or Trojans. Many kinds of lists can be useful to correlate, such as known ToR exit points or relays, which have become particularly risky of late as a common means to introduce ransomware, and which certainly should not be seen conversing with any host within a corporate, government or other sensitive environment.
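The correlation itself can be sketched as a set lookup over each flow's end-points. The blacklist below uses documentation addresses, not real threat data:

```python
# Assumed reputation set; a production list would hold hundreds of thousands of entries
blacklist = {"203.0.113.66", "198.51.100.99"}

def correlate(flows, reputation):
    """Return every flow whose source or destination appears in the reputation set."""
    return [f for f in flows if f["src"] in reputation or f["dst"] in reputation]

flows = [
    {"src": "192.0.2.10", "dst": "203.0.113.66", "bytes": 840},   # talks to a listed host
    {"src": "192.0.2.11", "dst": "192.0.2.20", "bytes": 120},
]
suspect = correlate(flows, blacklist)   # only the first flow is flagged
```

Because the set lookup is O(1) per end-point, scaling to very large lists is mostly a matter of memory and of keeping the threat data current.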

Using a tool like CySight’s advanced End-Point Threat Detection allows NetFlow data to be correlated against hundreds of thousands of IP addresses of questionable reputation including ToR exits and relays in real-time with comprehensive historical forensics that can be deployed in a massively parallel architecture.

As a trusted source of deep network insights built on big data analysis capabilities, Netflow provides NOCs with an end-to-end security and performance monitoring and management solution. For more information on Netflow as a performance and security solution for large-scale environments, download our free Guide to Understanding Netflow.

Cutting-edge and innovative technologies like CySight deliver the deep end-to-end network visibility and security context required to assist in speedily impeding harmful attacks.


NetFlow for Advanced Threat Detection

Business networks are vital assets and require absolute protection against unauthorized access, malicious programs, and degradation of network performance. It is no longer enough to use only anti-virus applications.

By the time malware is detected and its signatures added to the antiviral definitions, access has been obtained and havoc wreaked, or the malware has buried itself inside the network and is harvesting data and passwords for later exploitation.

An article by Drew Robb in eSecurity Planet on September 3, 2015 (https://www.esecurityplanet.com/network-security/advanced-threat-detection-buying-guide-1.html) cited the Verizon 2015 Data Breach Investigations Report where 70 respondents reported over 80,000 security incidents which led to more than 2000 serious breaches in one year.

The report noted that phishing is commonly used to gain access and the malware then accumulates passwords and account numbers and learns the security defenses before launching an attack. A telling remark was made: “It is abundantly clear that traditional security solutions are increasingly ineffectual and that vendor assurances are often empty promises,” said Charles King, an analyst at Pund-IT. “Passive security practices like setting and maintaining defensive security perimeters simply don’t work against highly aggressive and adaptable threat sources, including criminal organizations and rogue states.”

So what can businesses do to protect themselves? How can they be proactive in addition to the passive perimeter defenses?

The very first line of defense is better education of users. In one test, an e-mail message was sent to the users, purportedly from the IT department, asking for their passwords in order to “upgrade security.” While 52 people asked the IT department if this was a real request, 110 mailed their passwords right back. In their attempts to be productive, over half of the recipients of phishing e-mails responded within an hour!

Another method of advanced threat protection is NetFlow Monitoring.

IT departments and managed service providers (MSPs) can use monitoring capabilities to detect, prevent, and report adverse effects on the network.

Traffic monitoring, for example, watches the flow of information and data traversing critical nodes and network links. Without using intrusive probes, this information helps decipher how applications are using the network and which ones are becoming bandwidth hogs. These are then investigated further to determine what is causing the problem and how best to manage the issue. Just adding more bandwidth is not the answer!

IT departments review this data to investigate which personnel are the power users of which applications, when the peak traffic times are and why, and similar information in addition to flagging and diving in-depth to review anomalies that indicate a potential problem.

If there are critical applications or services that the clients rely on for key account revenue streams, IT can provide real-time monitoring and display of the health of the networks supporting those applications and services. It is this ability to observe, analyze, and report on the network health and patterns of usage that provides the ability to make better decisions at the speed of business that CIOs crave.

CySight excels at network Predictive AI Baselining analytics solutions. It scales to collect, analyze, and report on NetFlow datastreams of over one million flows per second. Its team of specialists has prepped, installed, and deployed over 1,000 CySight performance monitoring solutions, including at over 50 Fortune 1000 companies and some of the largest ISPs/telcos in the world. A global leader, recognized with awards for Security and Business Intelligence at the World Congress on IT, CySight is also welcomed by Cisco as a Technology Development Partner.


Balancing Granularity Against Network Security Forensics

With the pace at which the social, mobile, analytics and cloud (SMAC) stack is evolving, IT departments must quickly adapt their security monitoring and prevention strategies to match the ever-changing networking landscape. By the same token, network monitoring solution (NMS) developers must walk a tightrope of their own: providing the detail and visibility their users need, without a cost to network performance. Yet much of security forensics depends on the ability to drill down into both live and historic data to identify how intrusions and attacks occur. This leads to the question: what is the right balance between collecting enough data to gain the front foot in network security management, and ensuring performance isn’t compromised in the process?

Effectively identifying trends will largely depend on the data you collect

Trend and pattern data tell Security Operations Center (SOC) staff much about their environments by allowing them to connect the dots on how systems may have become compromised. However, collecting large volumes of historic data requires the capacity to house it, something that can quickly become problematic for IT departments. NetFlow data analysis acts as a powerful counterweight to the problem of processing and storing chunks of data, since it collects compressed header information that is far less resource-intensive than capturing entire packets or investigating entire device log files. Log files, moreover, are often hackers’ first victims, deleted or corrupted as a means to disguise attacks or intrusions. With NetFlow Auditor’s ability to collect vast quantities of uncompromised transaction data without exhausting device resources, SOCs can perform detailed analyses on flow information that could reveal security issues such as data leaks that occur over time. Given that NetFlow monitoring can easily be enabled on most devices, pervasive security monitoring becomes relatively easy to configure in large environments.

Netflow security monitoring can give SOCs real-time security metrics

NetFlow, when retained at high granularity, facilitates seamless detection of traffic anomalies as they occur and, when coupled with smart network behavior anomaly detection (NBAD), can alert engineers when data traverses the wire in an abnormal way, allowing for both quick detection and containment of compromised devices or entire segments. Network intrusions are typically detected when data traverses the environment in an unusual way and compromised devices experience spikes in multiple network telemetry metrics. As malicious software attempts to siphon information from systems, the resultant increase in out-of-the-norm activity triggers warnings that bring SOC teams into the loop of what is happening. IdeaData’s NetFlow Auditor employs machine learning that continuously compares multi-metric baselines against current network activity and quickly picks up on anomalies overlooked by other flow solutions, even before they constitute a system-wide threat. This type of behavioral analysis of network traffic places security teams on the front foot in the ongoing battle against malicious attacks on their systems.
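A toy sketch of the baselining idea (not IdeaData's actual algorithm): compare a current reading against the mean and standard deviation of that metric's own history.

```python
import statistics

def is_anomalous(history, current, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0   # guard against a perfectly flat baseline
    return abs(current - mean) / stdev > threshold

bytes_per_min = [1200, 1300, 1250, 1280, 1220, 1310, 1240]   # illustrative baseline
normal = is_anomalous(bytes_per_min, 1290)    # False: within normal variation
spike = is_anomalous(bytes_per_min, 9000)     # True: a siphoning-sized spike
```

A multi-metric system would run this kind of comparison across many counters at once (bytes, packets, flows, ports, peers) and account for seasonality, which is what makes slow, low-visibility attacks stand out.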

Network metrics are being generated on a big data scale

Few things can undermine a network’s performance and risk more than a monitoring solution that strains to provide anticipated visibility. However, considering the increasing complexity of distributed connected assets and the ways and speed in which people and IoT devices are being plugged into networks today, pervasive and detailed monitoring is absolutely crucial. Take the bring your own device (BYOD) phenomenon and the shift to the cloud, for example. Networking and security teams need visibility into where, when, and how mobile phones, tablets, smart watches, and IoT devices are going on and offline and how to better manage the flow of data to and from user devices. Mobile devices increasingly run their own versions of business applications and with BYOD cultures somewhat undermining IT’s ability to dictate the type of software allowed to run on personal devices, the need to monitor traffic flow from such devices – from both a security and a performance perspective – becomes clear.

General NetFlow performance analytics tools can inform NOC teams about how large IP traffic flows move between devices, with basic usage statistics at a device or segment level. However, when network metrics are generated on a big data scale, traffic anomalies that require SOC investigation get lost in the leaky-bucket sorting algorithms of basic tools. Detecting the real underlying reasons for traffic degradation, identifying risky communications such as ransomware, DDoS, slow DoS, peer-to-peer (p2p) and the dark web (ToR), and having complete historical visibility to track back undesirable applications become absolutely critical, but far less difficult, with NetFlow Auditor’s ability to easily provide information on all of the traffic that traverses the environment.

NetFlow security monitoring evolves alongside technology organically

Thanks to NetFlow and the unique multi-metric design that IdeaData has implemented, systems evolving at an increasing rate does not mean you need to re-invent your security apparatus every six months or so. NetFlow Auditor’s ubiquity, reliability, and flexibility give NOC and SOC teams deep visibility, minus the administrative overhead of getting it up and running and of collecting and benefiting from big flow data’s deep insights. You can even fine-tune your monitoring to the granularity you need to keep your systems safe, secure, and predictable. The result is fewer network blind spots, which so often act as the Achilles heel of the modern security and network expert.

At the other end of the scale, NetFlow analyzers – in their varying feature sets – give NOCs some basic ability to collect, analyze, and alert on top-talker bandwidth metrics, which some engineers may still believe are the most pertinent to their needs. Once you’ve decided on the data you need today, whilst keeping an eye on what you’ll need tomorrow, it’s time to choose the collector that does the job best.


What is NetFlow & How Can Organizations Leverage It?

NetFlow is a feature originally introduced on Cisco devices (but now generally available on many vendor devices) which provides the ability for an organization to monitor and collect IP network traffic entering or exiting an interface.
Through analysis of the data provided by NetFlow, a network administrator is able to detect things such as the source and destination of traffic, class of service, and the causes of congestion on the network.

NetFlow is designed to be utilized either from the software built into a router/switch or from external probes.

The purpose of NetFlow is to provide an organization with information about network traffic flow, both into and out of the device, by analyzing the first packet of a flow and using it as the standard for the rest of the flow. NetFlow has two variants, designed to allow for more flexibility when implementing NetFlow on a network.

NetFlow was originally developed by Cisco as a packet switching technology for Cisco routers and implemented in IOS 11.x in 1996.

The concept was that instead of having to inspect each packet in a “flow”, the device need only inspect the first packet and create a “NetFlow switching record”, also called a “route cache record”.

Once that record was created, further packets in the same flow did not need to be inspected; they could simply be forwarded based on the determination made from the first packet. While this idea was forward-thinking, it had many drawbacks that made it unsuitable for larger Internet backbone routers.
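The route-cache idea can be sketched as a dictionary keyed on the classic flow five-tuple (the packet field names below are illustrative):

```python
flow_cache = {}

def process_packet(pkt):
    """Only the first packet of a flow creates a record; later packets just update counters."""
    key = (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"], pkt["proto"])
    entry = flow_cache.get(key)
    if entry is None:
        # First packet: inspect it and create the "route cache record"
        flow_cache[key] = {"packets": 1, "bytes": pkt["length"]}
    else:
        # Subsequent packets: handled on the cached decision, no re-inspection
        entry["packets"] += 1
        entry["bytes"] += pkt["length"]

for length in (60, 1500, 1500):   # three packets of one TCP flow (protocol 6)
    process_packet({"src_ip": "192.0.2.1", "dst_ip": "192.0.2.2",
                    "src_port": 4321, "dst_port": 80, "proto": 6, "length": length})
# flow_cache now holds a single record: 3 packets, 3060 bytes
```

It is exactly these per-flow records, exported rather than merely cached, that later became the NetFlow data that collectors analyze.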

In the end, Cisco abandoned that form of traffic routing in favor of “Cisco Express Forwarding”.

However, Cisco (and others) realized that by collecting and storing / forwarding that “flow data” they could offer insight into the traffic that was traversing the device interfaces.

At the time, the only way to see any information about what IP addresses or application ports were “inside” the traffic was to deploy packet sniffing systems which would sit inline (or connect to SPAN/mirror ports) and “sniff” the traffic. This can be an expensive and sometimes difficult solution to deploy.

Instead, by exporting the NetFlow data to an application which could store / process / display the information, network managers could now see many of the key meta-data aspects of traffic without having to deploy the “sniffer” probes.

Routers and switches which are NetFlow-capable are able to collect the IP traffic statistics at all interfaces on which NetFlow is enabled. This information is then exported as NetFlow records to a NetFlow collector, which is typically a server doing the traffic analysis.
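As a rough illustration of what a collector does on receipt of an export datagram, the sketch below decodes NetFlow v5 records. Field offsets follow the published v5 datagram layout; this is a teaching sketch, not a production collector (a real one would bind a UDP socket and loop):

```python
# Minimal sketch of decoding a NetFlow v5 export datagram as a collector
# would receive it over UDP (header is 24 bytes, each flow record 48 bytes).
import struct
import socket

V5_HEADER = struct.Struct("!HHIIIIBBH")             # 24-byte export header
V5_RECORD = struct.Struct("!IIIHHIIIIHHBBBBHHBBH")  # 48 bytes per flow record

def parse_v5(datagram: bytes):
    """Return a list of (src_ip, dst_ip, src_port, dst_port, octets) tuples."""
    version, count, *_rest = V5_HEADER.unpack_from(datagram, 0)
    if version != 5:
        raise ValueError(f"not a NetFlow v5 datagram (version={version})")
    flows = []
    for i in range(count):
        fields = V5_RECORD.unpack_from(
            datagram, V5_HEADER.size + i * V5_RECORD.size)
        src = socket.inet_ntoa(struct.pack("!I", fields[0]))  # srcaddr
        dst = socket.inet_ntoa(struct.pack("!I", fields[1]))  # dstaddr
        octets = fields[6]                   # dOctets: layer-3 bytes in flow
        src_port, dst_port = fields[9], fields[10]
        flows.append((src, dst, src_port, dst_port, octets))
    return flows
```

Everything a flow analyzer shows – top talkers, ports, byte counts – is derived from records like these after they land at the collector.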

There are two main NetFlow variants: Security Event Logging and Standalone Probe-Based Monitoring.

Security Event Logging was introduced on the Cisco ASA 5580 products and utilizes NetFlow v9 fields and templates. It delivers security telemetry in high performance environments and offers the same level of detail in logged events as syslog.

Standalone Probe-Based Monitoring is an alternative to flow collection from routers and switches and uses NetFlow probes, allowing NetFlow to overcome some of the limitations of router-based monitoring. Dedicated probes allow for easier implementation of NetFlow monitoring, but probes must be placed at each link to be observed and probes will not report separate input and output as a router will.

An organization or company may implement NetFlow by utilizing a NetFlow-capable device. However, they may wish to use one of the variants for a more flexible experience.

By using NetFlow, an organization will have insight into the traffic on its network, which may be used to find sources of congestion and improve network traffic flow so that the network is utilized to its full capability.


Seven Reasons To Analyze Network Traffic With NetFlow

NetFlow allows you to keep an eye on the traffic and transactions that occur on your network. It can detect unusual traffic, a request to a malicious destination or the download of an unusually large file. NetFlow analysis helps you see what users are doing, gives you an idea of how your bandwidth is used and can help you improve your network, besides protecting you from a number of attacks.

There are many reasons to analyze network traffic with NetFlow, including making your system more efficient as well as keeping it safe. Here are some of the reasons behind many organizations’ adoption of NetFlow analysis:

  • Analyze your entire network. NetFlow allows you to keep track of all the connections occurring on your network, including those hidden by a rootkit. You can review all the ports and external hosts an IP address connected to within a specific period of time, and collect data to get an overview of how your network is used.
  • Track bandwidth use. You can use NetFlow to track bandwidth use and see reports on average bandwidth use over time. This can help you determine when spikes are likely to occur so that you can plan accordingly. Tracking bandwidth allows you to better understand traffic patterns, and this information can be used to identify anything unusual. You can also easily identify surges caused by a user downloading a large file or by a DDoS attack.
  • Keep your network safe from DDoS attacks. These attacks target your network by overloading your servers with more traffic than they can handle. NetFlow can detect this type of unusual surge in traffic, as well as identify the botnet controlling the attack and the infected computers following the botnet’s orders. You can block the botnet and its network of infected computers, stopping the attack in progress and preventing future ones.
  • Protect your network from malware. Even the safest network can still be exposed to malware via users connecting from home or people bringing their mobile devices to work. A bot present on a home computer or smartphone could access your network, but NetFlow will detect this type of abnormal traffic, and auto-mitigation tools can block it automatically.
  • Optimize your cloud. By tracking bandwidth use, NetFlow can show you which applications slow down your cloud and give you an overview of how your cloud is used. You can also track performance to optimize your cloud and make sure your cloud service provider is delivering the solution they advertised.
  • Monitor users. Nearly everyone brings a smartphone to work nowadays and might use it for purposes other than work. Company data may also be exposed by insiders who have legitimate access but an inappropriate agenda, downloading and sharing sensitive data with outside sources. NetFlow lets you keep track of how much bandwidth is used for data leakage or personal activities, such as using Facebook during work hours.
  • Data Retention Compliance. NetFlow can fill in the gaps where other technologies cannot deliver. A well-architected NetFlow solution can help business and service providers to achieve and maintain data retention compliance for a wide range of government and industry regulations.
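The bandwidth-surge idea from the list above can be sketched in a few lines. This is a toy illustration with an assumed 5x threshold, not a production detector, and `find_surges` is an invented helper name:

```python
# Flag hosts whose current-interval byte count far exceeds their
# historical average - the kind of surge a large download or a
# DDoS-driven spike produces in flow data.
SURGE_FACTOR = 5.0   # assumed threshold: 5x the historical average

def find_surges(history, current):
    """history: {ip: [bytes per past interval]}; current: {ip: bytes now}.

    Returns (ip, ratio) pairs for surging hosts, largest ratio first.
    """
    surges = []
    for ip, bytes_now in current.items():
        past = history.get(ip, [])
        if not past:
            continue  # no baseline yet for this host
        avg = sum(past) / len(past)
        if avg > 0 and bytes_now > SURGE_FACTOR * avg:
            surges.append((ip, bytes_now / avg))
    return sorted(surges, key=lambda s: s[1], reverse=True)
```

In practice the per-interval byte counts would come straight from the collector's aggregated flow records rather than hand-built dictionaries.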

NetFlow is an easy way to monitor your network and provides several advantages, including making your network safer and collecting the data you need to optimize it. A comprehensive overview of your network from a single pane of glass makes monitoring easy and lets you see what is going on at a glance.

CySight takes the extra step, making life far easier for network and security professionals with smart alerts, actionable network intelligence, scalability, and automated diagnostics and mitigation in a complete technology package.

CySight can provide you with the right tools to analyze traffic, monitor your network, protect it and optimize it. Contact us to learn more about NetFlow and how you can get the most out of this amazing tool.


3 Ways Anomaly Detection Enhances Network Monitoring

With the increasing abstraction of IT services beyond the traditional server room, computing environments have evolved to become more efficient but also far more complex. Virtualization, mobile device technology, hosted infrastructure, Internet ubiquity and a host of other technologies are redefining the IT landscape.

From a cybersecurity standpoint, the question is how best to manage the growing complexity of environments and the changes in network behavior that come with every introduction of new technology.

In this blog, we’ll take a look at how anomaly detection-based systems are adding an invaluable weapon to Security Analysts’ arsenal in the battle against known – and unknown – security risks that threaten the stability of today’s complex enterprise environments.

Put your network traffic behavior into perspective

By continually analyzing traffic patterns at various intersections and time frames, performance and security baselines can be established, against which potential malicious activity is monitored and managed. But with large swathes of data traversing the average enterprise environment at any given moment, detecting abnormal network behavior can be difficult.

Through filtering techniques and algorithms based on live and historical data analysis, anomaly detection systems are capable of detecting even the most subtly crafted malicious software that may pose as normal network behavior. Anomaly-based systems also employ machine-learning capabilities to learn about new traffic as it is introduced and to provide greater context for how data traverses the wire, increasing their ability to identify security threats as they emerge.
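A toy example of the baselining idea: build a mean/standard-deviation baseline from historical per-interval samples and score a new measurement by its distance from that baseline. The 3-sigma threshold and the `zscore_anomaly` helper are assumptions for illustration, far simpler than the learning systems described above:

```python
# Score a new per-interval measurement (e.g. flows per minute from one
# host) against a historical baseline using a simple z-score test.
import statistics

def zscore_anomaly(baseline_samples, value, threshold=3.0):
    """Return (is_anomaly, z): z is the distance from the baseline mean
    in standard deviations; is_anomaly is True beyond the threshold."""
    mean = statistics.fmean(baseline_samples)
    stdev = statistics.pstdev(baseline_samples)
    if stdev == 0:
        # Perfectly flat baseline: any deviation at all is anomalous.
        return (value != mean, 0.0 if value == mean else float("inf"))
    z = (value - mean) / stdev
    return (abs(z) > threshold, z)
```

Real anomaly engines maintain baselines per host, per interface and per time-of-day, but the core comparison of "current sample versus learned norm" is the same.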

NetFlow is a popular source of network traffic data for building accurate performance and cybersecurity baselines with which to distinguish normal network activity patterns from potentially alarming network behavior.

Anomaly detection places Security Analysts on the front foot

An anomaly is defined as an action or event that is outside of the norm. But when a definition of what is normal is absent, loopholes can easily be exploited. This is often the case with signature-based detection systems that rely on a database of pre-determined virus signatures that are based on known threats. In the event of a new and yet unknown security threat, signature-based systems are only as effective as their ability to respond to, analyze and neutralize such new threats.

Since signatures work well against known attacks, signature-based systems are by no means useless in defending your network. They do, however, lack the flexibility of anomaly-based systems in that they cannot detect new threats. This is one of the reasons signature-based systems are typically complemented by some form of flow-based anomaly detection.

Anomaly based systems are designed to grow alongside your network

The chief strength of anomaly detection systems is that they allow Network Operation Centers (NOCs) to adapt their security apparatus to the demands of the day. With threats growing in number and sophistication, detection systems that can discover new threats, learn about them and provide preventative methodologies are the ideal tools with which to combat the cybersecurity threats of tomorrow. NetFlow anomaly detection with automated diagnostics does exactly this, employing machine-learning techniques for network threat detection and thereby automating much of the detection side of security management, while allowing Security Analysts to focus on prevention in their ongoing endeavors to secure their information and technology investments.


Identifying ToR threats without De-Anonymizing

Part 3 in our series on How to counter-punch botnets, viruses, ToR and more with Netflow focuses on ToR threats to the enterprise.

ToR (aka onion routing) and anonymized P2P relay services such as Freenet are where we can expect to see many more attacks, as well as malevolent actors out to deny your service or steal your valuable data. It’s useful to recognize that flow-based predictive AI baselining analytics provides the best and cheapest means of de-anonymizing or profiling this traffic.

“The biggest threat to the Tor network, which exists by design, is its vulnerability to traffic confirmation or correlation attacks. This means that if an attacker gains control over many entry and exit relays, they can perform statistical traffic analysis to determine which users visited which websites.” (source)

In a paper entitled “On the Effectiveness of Traffic Analysis Against Anonymity Networks Using Flow Records”, Sambuddho Chakravarty, Marco V. Barbera, Georgios Portokalidis, Michalis Polychronakis, and Angelos D. Keromytis demonstrate that, in a lab setting, 81 percent of Tor users can be de-anonymized with a traffic analysis attack.

It continues to be a cat-and-mouse game that requires innovative approaches to find ToR weaknesses, coupled with correlation attacks to identify routing paths. Doing this in real life is becoming much simpler, but the real challenge is that it requires the cooperation and coordination of businesses, ISPs and governments. Deploying cheap, easy-to-install micro-taps that act as both a ToR relay and a flow exporter, combined with a NetFlow toolset that can scale hierarchically and run path analysis in parallel across a multitude of ToR relays, can make this task easy and cost effective.

So what can we do about ToR today?

Even without de-anonymizing ToR traffic, there is a lot of intelligence to be gained simply by analyzing ToR exit and relay behavior. Using a flow tool that can change perspectives between flows, packets, bytes, counts or TCP flag counts allows you to qualify whether a ToR node is being used to download masses of data or is trickling data out.

Patterns of data can be very telling as to the nature of a transfer and, used in conjunction with other information, become a useful indicator of risk. As for supposedly secured networks, I can’t think of any instance where ToR/onion routing (or, for that matter, any external VPN or proxy service) needs to be used from within what is supposed to be a locked-down environment. Once ToR traffic has been identified in a sensitive environment, it is essential to immediately investigate and stop the IP addresses engaging in this suspicious behavior.
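A minimal sketch of this kind of exit/relay matching follows. The node addresses below are placeholder entries and `flag_tor_flows` is an invented helper; real deployments ingest a continuously maintained exit-node feed:

```python
# Flag any flow whose source or destination appears in a set of known
# Tor exit/relay IP addresses - the simplest form of reputation matching
# against flow records.
TOR_NODES = {"185.220.101.1", "199.87.154.255"}  # placeholder sample entries

def flag_tor_flows(flows, tor_nodes=TOR_NODES):
    """flows: iterable of (src_ip, dst_ip, bytes) tuples.

    Returns the flows that touch a known Tor node on either end.
    """
    return [f for f in flows if f[0] in tor_nodes or f[1] in tor_nodes]
```

Set membership checks keep this cheap even against hundreds of thousands of reputation entries, which is why flow records scale well for this kind of screening.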

Using a tool like CySight’s advanced End-Point Threat Detection allows NetFlow data to be correlated in real time against hundreds of thousands of IP addresses of questionable reputation, including ToR exits and relays, with comprehensive historical forensics, and can be deployed in a massively parallel architecture.

Performance Monitoring & Security Forensics: The 1-2 Punch for Network and IT Infrastructure Visibility