Archives

Posts Tagged ‘Network Forensics’

Big Data – A Global Approach To Local Threat Detection

From helping prevent loss of life in the event of a natural disaster, to aiding marketing teams in designing more targeted strategies to reach new customers, big data seems to be the chief talking point amongst a broad and diverse circle of professionals.

For Security Engineers, big data analytics is proving to be an effective defense against evolving network intrusions, delivering near real-time insights based on high volumes of diverse network data. This is largely due to technological advances that have made it possible to transmit, capture, store and analyze swathes of data through high-powered and relatively low-cost computing systems.

In this blog, we’ll take a look at how big data is bringing deeper visibility to security teams as environments increase in complexity and our reliance on pervading network systems intensifies.

Big data analysis is providing answers to the data deluge dilemma

Large environments generate gigabytes of raw user, application and device metrics by the minute, leaving security teams stranded in a deluge of data. Placing them further on the back foot is the need to sift through this data, which involves considerable resources that at best only provide a retrospective view on security breaches.

Big data offers a solution to the issue of “too much data too fast” through the rapid analysis of swathes of disparate metrics through advanced and evolving analytical platforms. The result is actionable security intelligence, based on comprehensive datasets, presented in an easy-to-consume format that not only provides historic views of network events, but enables security teams to better anticipate threats as they evolve.

In addition, big data’s ability to facilitate more accurate predictions on future events is a strong motivating factor for the adoption of the discipline within the context of information security.

Leveraging big data to build the secure networks of tomorrow

As new technologies arrive on the scene, they introduce businesses to new opportunities – and new vulnerabilities. However, the application of Predictive AI Baselining analytics to network security in the context of the evolving network is helping to build the secure, stable and predictable networks of tomorrow. Detecting modern, more advanced threats requires big data capabilities from incumbent intrusion detection and prevention (IDS/IPS) solutions to distinguish normal traffic from potential threats.

By contextualizing diverse sets of data, Security Engineers can more effectively detect stealthily designed threats that traditional monitoring methodologies often fail to pick up. For example, Advanced Persistent Threats (APT) are notorious for their ability to go undetected by masking themselves as day-to-day network traffic. These low visibility attacks can occur over long periods of time and on separate devices, making them difficult to detect since no discernible patterns arise from their activities through the lens of traditional monitoring systems.

Big data Predictive AI Baselining analytics lifts the veil on threats that operate under the radar of traditional signature and log-based security solutions by contextualizing traffic and giving NOCs a deeper understanding of the data that traverses the wire.
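To make the APT example concrete, here is a minimal, hypothetical sketch (not any vendor's actual implementation) of how contextualizing flow data over a long window can expose low-and-slow exfiltration that per-flow inspection would miss:

```python
from collections import defaultdict

def cumulative_outbound(flows):
    """Sum outbound bytes per source host across a long window.

    `flows` is an iterable of (src_ip, dst_ip, n_bytes) tuples --
    a simplified stand-in for exported flow records."""
    totals = defaultdict(int)
    for src, _dst, n_bytes in flows:
        totals[src] += n_bytes
    return totals

def slow_leak_suspects(flows, threshold_bytes):
    """Flag hosts whose total outbound volume over the window exceeds
    a threshold, even if no single flow looked unusual on its own."""
    totals = cumulative_outbound(flows)
    return sorted(host for host, total in totals.items()
                  if total > threshold_bytes)
```

Each individual transfer can look like ordinary traffic; only the aggregate view over the full window reveals the pattern.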

Gartner states that, “Big data Predictive AI Baselining analytics enables enterprises to combine and correlate external and internal information to see a bigger picture of threats against their enterprises.” It also eliminates the siloed approach to security monitoring by converging network traffic and organizing it in a central data repository for analysis, resulting in much-needed granularity for effective intrusion detection, prevention and security forensics.

In addition, Predictive AI Baselining analytics eliminates barriers to internal collaborations between Network, Security and Performance Engineers by further contextualizing network data that traditionally acted as separate pieces of a very large puzzle.

So is big data Predictive AI Baselining analytics the future of network monitoring?

In a way, NOC teams have been using big data long before the discipline went mainstream. Large networks have always produced high volumes of data at high speeds – only now, that influx has intensified exponentially.

Thankfully, with the rapid evolution of computing power at relatively low cost, the possibilities of what our data can tell us about our networks are becoming more apparent.

The timing couldn’t be more appropriate, since traditional perimeter-based IDS/IPS no longer meets the demands of modern networks that span vast geographical areas with multiple entry points.

In the age of cloud, mobility, ubiquitous Internet and the ever-expanding enterprise environment, big data capabilities will and should become an intrinsic part of virtually every security apparatus.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

How to Improve Cyber Security with Advanced Netflow Network Forensics

Most organizations today deploy network security tools that are built to perform limited prevention – traditionally “blocking and tackling” at the edge of a network using a firewall or by installing security software on every system.

This is only one third of a security solution, and has become the least effective measure.

The growing complexity of IT infrastructure is the major challenge faced by existing network security tools. The major forces impacting them are the rising sophistication of cybercrime, growing compliance and regulatory mandates, the expanding virtualization of servers and a constant need for visibility, compounded by ever-increasing data volumes. Larger networks generate enormous amounts of data, into which incident teams must have a high degree of visibility for analysis and reporting purposes.

An organization’s network and security teams are faced with increasing complexities, including network convergence, increased data and flow volumes, intensifying security threats, government compliance issues, rising costs and network performance demands.

With network visibility and traceability also top priorities, companies must look to security network forensics to gain insight and uncover issues. The speed with which an organization can identify, diagnose, analyze, and respond to an incident will limit the damage and lower the cost of recovery.

Analysts are better positioned to mitigate risk to the network and its data through security-focused network forensics applied at a granular level. Only with sufficient granularity, historic visibility, and tools able to machine-learn from the network’s big data can the risk of an anomaly be properly diagnosed and mitigated.

Doing so helps staff identify breaches that occur in real time, as well as insider threats and data leaks that take place over a prolonged period. Insider threats are among the most difficult to detect and are missed by most security tools.

Many network and security professionals assume that they can simply analyze data captured using their standard security devices, like firewalls and intrusion detection systems. However, they quickly discover the limitations: these devices are not designed to record and report on every transaction, and their lack of deep visibility, scalability and historic data retention makes old-fashioned network forensic reporting expensive and impractical.

NetFlow is a flow-export technology that enables IT departments to accurately audit network data and host-level activity. Analytics built on NetFlow enhance network security and performance, making it easy to identify suspicious user behavior and protect your entire infrastructure.

A well-designed NetFlow forensic tool should include powerful features that can allow for:

  • Micro-level data recording to assist in identifying real-time breaches and data leaks;
  • Event notifications and alerts for network administrators when irregular traffic movements are detected;
  • Trend and baseline highlighting, so IT staff can provision services accordingly;
  • Learning of normal behavior, so network security staff can quickly detect and mitigate threats;
  • Capture of highly granular traffic over time, enabling deep visibility across the entire network infrastructure;
  • 24-7 automation and flexible reporting processes to deliver usable business intelligence and security forensics, particularly for analytics that can take a long time to produce.
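As an illustration of the alerting feature above, here is a minimal sketch (the metric names and the 3x threshold are hypothetical, not a real product API) of comparing current readings against historical baselines:

```python
def baseline(history):
    """Mean of historical samples for one metric (e.g. flows/minute)."""
    return sum(history) / len(history)

def alerts(current, histories, factor=3.0):
    """Return the names of metrics whose current value exceeds `factor`
    times their historical baseline -- a toy stand-in for the
    'event notifications on irregular traffic' feature above."""
    fired = []
    for name, value in current.items():
        base = baseline(histories[name])
        if base > 0 and value > factor * base:
            fired.append(name)
    return fired
```

In practice a real tool would use learned, time-aware baselines rather than a flat mean, but the comparison logic is the same shape.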

Forensic analysts require both high-level and detailed visibility through aggregation, division and drill-down algorithms such as:

  • Deviation / Outlier analysis
  • Bi-directional analysis
  • Cross section analysis
  • Top X/Y analysis
  • Dissemination analysis
  • Custom Group analysis
  • Baselining analysis
  • Percentile analysis
  • QoS analysis
  • Packet Size analysis
  • Count analysis
  • Latency and RTT analysis
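Two of the algorithms listed above, Top X/Y and percentile analysis, can be sketched in a few lines of Python using only the standard library (the record layout here is a hypothetical simplification of real flow exports):

```python
from collections import Counter
import statistics

def top_n(flows, key, n=5):
    """Top-N aggregation: total bytes per value of `key` (e.g. source IP)."""
    totals = Counter()
    for flow in flows:
        totals[flow[key]] += flow["bytes"]
    return totals.most_common(n)

def percentile_95(samples):
    """95th percentile of per-interval traffic volumes, the figure
    commonly used in percentile analysis and burstable billing."""
    return statistics.quantiles(samples, n=100)[94]
```

The same aggregation pattern extends to the other dimensions (QoS class, packet size, custom groups) by changing the key and the summed field.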

Further, when integrated with a visual analytics process, such a tool enables additional insights for the forensic professional when analyzing subsets of the flow data surrounding an event.

In some ways it needs to act as a log analyzer, a security information and event management (SIEM) system, and a network behavior anomaly and threat detector, all rolled into one.

The ultimate goal is to deploy a multi-faceted flow-analytics solution that can complement your business by providing extreme visibility and eliminating network blind spots, both in your physical infrastructure and in the cloud, automatically detecting and diagnosing anomalous traffic across your entire network and improving your mean time to detect and repair.

Performance Monitoring & Security Forensics: The 1-2 Punch for Network and IT Infrastructure Visibility

5 Perks of Network Performance Management

Network performance management is something that virtually every business needs, but not something that every business is actively doing, or is even aware of.  And why should they be?

While understanding the technical side of things is best left to the IT department, understanding the benefits of a properly managed network is something that will help get the business managers on board, especially when good performance management solutions might be a cost that hadn’t been considered.  So what are the benefits?

1.  Avoiding downtime – Downtime across an entire network is going to be rare, but downtime in small areas of the network is possible if a segment gets overloaded.  Downtime of any kind is just not something that a business can tolerate, for a few reasons:

  • it leaves that area of the network unmonitored, which is a serious security issue
  • shared files won’t be accessible, nor will they be updating as users save the files.  This will lead to multiple versions of the same file, and quite a few headaches when the network is accessible again
  • downtime that affects customers is even worse, and can result in lost revenue or negative customer experiences

2.  Network speed – This is one of the most important and most easily quantified aspects of managing NetFlow.  It affects every user on the network constantly, and anything that slows users down means either more work hours or delays.  Obviously, neither is a good problem to have.  Whether it’s uploading a file, sending a file to a coworker, or sending a file to a client, speed is of paramount importance.

3.  Scalability – Almost every business wants to grow, and nowhere is that more true than in the tech sector.  As the business grows, the network will have to grow with it to support more employees and clients.  By managing the performance of the network, it is very easy to see when or where it is being stretched too thin or overwhelmed.  As performance degrades, it’s very easy to set thresholds that show when the network needs to be upgraded or expanded.

4.  Security – Arguably the most important aspect of network management, even though it might not be thought of as a performance aspect.  An unsecured network is worse than a useless network, and data breaches can ruin a company.  So how does this play into performance management?

By monitoring NetFlow performance, it’s easy to see where the most resources are being used.  Many security attacks drain resources, so resource spikes in unusual areas can point to a security flaw.  With proper software, these issues can be not only monitored, but also recorded and corrected.
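One simple way to express the “resource spikes in unusual areas” idea is a standard score against recent history; this is an illustrative sketch, not a description of any particular product:

```python
import statistics

def spike_score(history, current):
    """Standard score of the current reading against its recent history.
    A large score on a normally quiet segment is the kind of signal
    that can point at a resource-draining attack."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return (current - mean) / stdev if stdev else 0.0

def is_spike(history, current, threshold=3.0):
    """Flag readings more than `threshold` standard deviations high."""
    return spike_score(history, current) > threshold
```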

5.  Usability – Unfortunately, not all employees have a working knowledge of how networks operate.  In fact, as many in IT support will attest, most employees aren’t tech savvy.  However, most employees will need to use the network as part of their daily work.  This conflict is why usability is so important.  The easiest way to minimize training costs with any network management program is to ensure that it is as user-friendly as possible.

The fanciest, most impressive network performance management system isn’t worth anything if no one knows how to use and optimize it properly.  Even if the IT department has no issues with it, the reports and general information should be as easy to decipher as possible.

Is your network as optimized as it could be?  Are you able to monitor the network’s performance and flow, or perform the network forensics needed to determine where issues are?  Don’t try to tackle all of this on your own.  Contact us and let us help you support your business with the best network monitoring for your specific needs.


Balancing Granularity Against Network Security Forensics

With the pace at which the social, mobile, analytics and cloud (SMAC) stack is evolving, IT departments must quickly adapt their security monitoring and prevention strategies to match the ever-changing networking landscape. By the same token, network monitoring solution (NMS) developers must walk a tightrope of their own: providing the detail and visibility their users need, without a cost to network performance. But much of security forensics depends on the ability to drill down into both live and historic data to identify how intrusions and attacks occur. This leads to the question: what is the right balance between collecting enough data to gain the front foot in network security management, and ensuring performance isn’t compromised in the process?

Effectively identifying trends will largely depend on the data you collect

Trend and pattern data tell Security Operations Center (SOC) staff much about their environments by allowing them to connect the dots in terms of how systems may have become compromised. However, collecting large portions of historic data requires the capacity to house it – something that can quickly become problematic for IT departments. NetFlow data analysis acts as a powerful counterweight to the problem of processing and storing chunks of data, since it collects compressed header information that is far less resource-intensive than capturing entire packets or investigating entire device log files. Also, log files are often hackers’ first victims, deleted or corrupted as a means to disguise attacks or intrusions. With NetFlow Auditor’s ability to collect vast quantities of uncompromised transaction data without exhausting device resources, SOCs are able to perform detailed analyses of flow information that could reveal security issues such as data leaks that occur over time. And since NetFlow monitoring can easily be configured on most devices, pervasive security monitoring becomes relatively easy to achieve in large environments.

Netflow security monitoring can give SOCs real-time security metrics

NetFlow, when retained at high granularity, can facilitate seamless detection of traffic anomalies as they occur and, when coupled with smart network behavior anomaly detection (NBAD), can alert engineers when data traverses the wire in an abnormal way – allowing for both quick detection and containment of compromised devices or entire segments. Network intrusions are typically detected when data traverses the environment in an unusual way and compromised devices experience spikes across multiple network telemetry metrics. As malicious software attempts to siphon information from systems, the resulting out-of-the-norm activity will trigger warnings that bring SOC teams into the loop on what is happening. IdeaData’s NetFlow Auditor employs machine learning that continuously compares multi-metric baselines against current network activity and quickly picks up on anomalies overlooked by other flow solutions, even before they constitute a system-wide threat. This type of behavioral analysis of network traffic places security teams on the front foot in the ongoing battle against malicious attacks on their systems.
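The multi-metric idea can be illustrated with a small sketch; the EWMA baseline and the “more than one metric deviating at once” rule below are assumptions for illustration, not NetFlow Auditor’s actual algorithm:

```python
def ewma_baseline(samples, alpha=0.3):
    """Exponentially weighted moving average -- one simple way a tool
    could keep a continuously updated baseline for a single metric."""
    baseline = samples[0]
    for sample in samples[1:]:
        baseline = alpha * sample + (1 - alpha) * baseline
    return baseline

def multi_metric_anomaly(current, histories, tolerance=2.0):
    """Flag a device when more than one metric deviates from its
    learned baseline at the same time, echoing the multi-metric
    approach described above."""
    deviating = [metric for metric, value in current.items()
                 if value > tolerance * ewma_baseline(histories[metric])]
    return len(deviating) > 1
```

Requiring agreement across metrics is one way to cut false positives from a single noisy counter.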

Network metrics are being generated on a big data scale

Few things undermine a network’s performance, or introduce more risk, than a monitoring solution that strains to provide the visibility expected of it. However, considering the increasing complexity of distributed connected assets and the ways and speed in which people and IoT devices are being plugged into networks today, pervasive and detailed monitoring is absolutely crucial. Take the bring your own device (BYOD) phenomenon and the shift to the cloud, for example. Networking and security teams need visibility into where, when, and how mobile phones, tablets, smart watches, and IoT devices go on and offline, and how to better manage the flow of data to and from user devices. Mobile devices increasingly run their own versions of business applications, and with BYOD cultures somewhat undermining IT’s ability to dictate the type of software allowed to run on personal devices, the need to monitor traffic flow from such devices – from both a security and a performance perspective – becomes clear.

General NetFlow performance analytics tools are capable of informing NOC teams about how IP traffic flows between devices, with basic usage statistics at a device or segment level. However, when network metrics are generated on a big data scale, traffic anomalies that require SOC investigation get lost in the leaky-bucket sorting algorithms of basic tools. Detecting the real underlying reasons for traffic degradation, identifying risky communications such as ransomware, DDoS, slow DoS, peer-to-peer (p2p) and dark web (Tor) traffic, and having complete historical visibility to track back undesirable applications become absolutely critical – but far less difficult – with NetFlow Auditor’s ability to easily provide information on all of the traffic that traverses the environment.

NetFlow security monitoring evolves alongside technology organically

Thanks to NetFlow and the unique, multi-metric design that IdeaData has implemented, systems evolving at an increasing rate doesn’t mean you need to re-invent your security apparatus every six months or so. NetFlow Auditor’s ubiquity, reliability, and flexibility give NOC and SOC teams deep visibility without the administrative overhead of getting it up and running, collecting the data, and benefiting from big flow data’s deep insights. You can even fine-tune your monitoring to the granularity you need to keep your systems safe, secure, and predictable. The result is fewer of the network blind spots that so often act as the Achilles’ heel of modern security and network experts.

At the other end of the scale, NetFlow analyzers – in their varying feature sets – give NOCs some basic ability to collect, analyze, and alert on top-talker bandwidth metrics, which some engineers may still believe are the most pertinent to their needs. Once you’ve decided on the data you need today, whilst keeping an eye on what you’ll need tomorrow, it’s time to choose the collector that does the job best.


Why NetFlow is Perfect for Forensics and Compliance

NetFlow forensic investigations can produce report evidence that can be used in court, as flow data describes the movement of traffic even without necessarily describing its contents.

It’s therefore crucial that the NetFlow solution deployed can scale its archive to retain the full context of all the flow data – not just the top of the data, or the data relating to one tool’s idea of a security event.

The issue with forensics and flow data is that achieving full compliance requires a data warehouse that can grow to a huge number of flow records.

These records, retained in the data warehouse, may not seem important at the time of collection, but they become critical for uncovering behavior that may have been occurring over a long period and for ascertaining the damage done by that traffic. I am talking broadly here, as there are so many different instances where the data suddenly becomes critically important that it’s hard to do the topic justice with one or two case studies. Remember: you don’t know what you don’t know, but when you discover what you didn’t know, you need the ability to quantify the loss, or the risk of loss.

How much flow data is enough to retain to satisfy compliance?

From our experience it is usually between 3 and 24 months, depending on the size of the environment and the legal compliance requirements for data protection or data retention. For most corporates we recommend 12 months as a best practice. Data retention rules for ISPs in some countries require the ability to analyze traffic for up to 2 years. Fortunately, disk today is cheap and flow is cost-effective to deploy across the organization. There is more information about this in our Performance and Security eBook.
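A back-of-envelope sizing helps when planning that retention window. The ~100-byte record size below is an assumption (real collectors compress and index their archives), so treat this as an upper-bound sketch:

```python
def retention_gb(flows_per_second, bytes_per_record, days):
    """Rough uncompressed disk estimate for a flow archive:
    records/second * bytes/record * seconds/day * days."""
    total_bytes = flows_per_second * bytes_per_record * 86_400 * days
    return total_bytes / 1e9

# e.g. a site averaging 5,000 flows/s, ~100 bytes per stored record,
# retained for 12 months:
#   retention_gb(5_000, 100, 365)  -> 15768.0 GB (about 15.8 TB)
```

Even at this upper bound, the figure supports the point above: a year of flow records is well within the reach of inexpensive disk.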

Once a security issue has been identified, the flow database can be used to quantify exactly which IPs accessed a system and when, as well as the impact on dependent systems that the host conversed with, directly or indirectly, on the network before and after the issue.
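As a toy illustration of that kind of query, assuming a simplified in-memory record layout rather than a real flow database:

```python
def accesses_to(flows, target_ip, start, end):
    """Return the sorted set of source IPs that talked to `target_ip`
    between `start` and `end`. `flows` is a list of dicts with
    "src", "dst" and a "ts" timestamp -- a stand-in for querying
    the flow archive after an incident."""
    return sorted({f["src"] for f in flows
                   if f["dst"] == target_ip and start <= f["ts"] <= end})
```

A real investigation would then pivot on each returned IP to trace the dependent systems it conversed with before and after the event.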

Trawling through a huge collection of flow data can be a lengthy task, so it’s necessary to be able to run automated, parallel Predictive AI Baselining analytics to gauge the damage from a long-term insider threat that could have been dribbling out your intellectual property slowly over a few months.


3 Ways Anomaly Detection Enhances Network Monitoring

With the increasing abstraction of IT services beyond the traditional server room, computing environments have evolved to be more efficient and also far more complex. Virtualization, mobile device technology, hosted infrastructure, Internet ubiquity and a host of other technologies are redefining the IT landscape.

From a cybersecurity standpoint, the question is how best to manage the growing complexity of environments and the changes in network behavior that come with every introduction of new technology.

In this blog, we’ll take a look at how anomaly detection-based systems are adding an invaluable weapon to Security Analysts’ arsenal in the battle against known – and unknown – security risks that threaten the stability of today’s complex enterprise environments.

Put your network traffic behavior into perspective

By continually analyzing traffic patterns at various intersections and time frames, performance and security baselines can be established, against which potential malicious activity is monitored and managed. But with large swathes of data traversing the average enterprise environment at any given moment, detecting abnormal network behavior can be difficult.

Through filtering techniques and algorithms based on live and historical data analysis, anomaly detection systems are capable of detecting even the most subtly crafted malicious software that poses as normal network behavior. Anomaly-based systems also employ machine-learning capabilities to learn about new traffic as it is introduced and provide greater context to how data traverses the wire, increasing their ability to identify security threats as they emerge.
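One simple form such learned context can take is a time-of-day baseline: an observation is judged against what is normal for that hour, not against a global average. This is an illustrative sketch with hypothetical names:

```python
from collections import defaultdict
import statistics

def hourly_profile(history):
    """Learn a per-hour-of-day traffic baseline from (hour, value)
    observations, so traffic at 03:00 is judged against other nights
    rather than against the busy midday average."""
    buckets = defaultdict(list)
    for hour, value in history:
        buckets[hour].append(value)
    return {hour: statistics.fmean(values) for hour, values in buckets.items()}

def unusual(profile, hour, value, factor=3.0):
    """Flag a reading far above the learned norm for that hour."""
    return value > factor * profile.get(hour, float("inf"))
```

A modest transfer at 3 a.m. can be flagged even though the same volume at noon would be unremarkable.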

NetFlow is a popular technology for collecting the network traffic data used to build accurate performance and cybersecurity baselines, with which normal network activity patterns can be distinguished from potentially alarming behavior.

Anomaly detection places Security Analysts on the front foot

An anomaly is defined as an action or event that is outside of the norm. But when a definition of what is normal is absent, loopholes can easily be exploited. This is often the case with signature-based detection systems that rely on a database of pre-determined virus signatures that are based on known threats. In the event of a new and yet unknown security threat, signature-based systems are only as effective as their ability to respond to, analyze and neutralize such new threats.

Since signatures work well against known attacks, signature-based systems are by no means useless for defending your network. They do, however, lack the flexibility of anomaly-based systems in that they are incapable of detecting new threats. This is one of the reasons signature-based systems are typically complemented by some iteration of a flow-based anomaly detection system.

Anomaly based systems are designed to grow alongside your network

The chief strength of anomaly detection systems is that they allow Network Operation Centers (NOCs) to adapt their security apparatus to the demands of the day. With threats growing in number and sophistication, detection systems that can discover, learn about and help prevent threats are the ideal tools with which to combat the cybersecurity threats of tomorrow. NetFlow anomaly detection with automated diagnostics does exactly this, employing machine learning techniques for network threat detection and thereby automating much of the detection side of security management, while allowing Security Analysts to focus on prevention in their ongoing endeavors to secure their information and technology investments.


Benefits of Network Security Forensics

The networks that your business operates on are often open and complex.

Your IT department is responsible for mitigating network risks, managing performance and auditing data to ensure functionality.

Using NetFlow forensics can help your IT team maintain the competitiveness and reliability of the systems required to run your business.

In IT, network security forensics involves the monitoring and analysis of your network’s traffic to gather information, obtain legal evidence and detect network intrusions.

These activities help your company perform the following actions.

  • Adjust to increased data and NetFlow volumes
  • Identify heightened security vulnerabilities and threats
  • Align with corporate and legislative compliance requirements
  • Contain network costs
  • Analyze network performance demands
  • Recommend budget-friendly implementations and system upgrades

NetFlow forensics helps your company maintain accountability and trace usage; these functions become increasingly difficult as your network becomes more intricate.

The more systems your network relies on, the more difficult this process becomes.

While your company likely has standard security measures in place, e.g. firewalls, intrusion detection systems and sniffers, they lack the capability to record all network activity.

Tracking all your network activity in real-time at granular levels is critical to the success of your organization.

Until recently, the ability to perform this type of network forensics has been limited due to a lack of scalability.

Now, there are web-based solutions that can collect and store this data to assist your IT department with this daunting task.

Solution capabilities include:

  • Record NetFlow data at a micro level
  • Discover security breaches and alert system administrators in real time
  • Identify trends and establish performance baselines
  • React to irregular traffic movements and applications
  • Provision network services more effectively

The ability to capture all of this activity will empower your IT department to provide more thorough analysis and take faster action to resolve system issues.

But, before your company can realize the full value of NetFlow forensics, your team needs to have a clear understanding of how to use this intelligence to take full advantage of these detailed investigative activities.

Gathering the data through automation is a relatively simple process once the required automation tools have been implemented.

Understanding how to organize these massive amounts of data into clear, concise and actionable findings is an additional skill set that must be developed within your IT team.

Having a team member, whether internal or via a third-party vendor, that can aggregate your findings and create visual representations that can be understood by non-technical team members is a necessary part of NetFlow forensics. It is important to stress the necessity of visualization; this technique makes it much easier to articulate the importance of findings.

In order to accurately and succinctly visualize security issues, your IT staff must have a deep understanding of the standard protocols of your network. Without this level of understanding, the ability to analyze and investigate security issues is limited, if not impossible.

Utilizing software to support the audit functions required for NetFlow forensics will help your company support the IT staff in gathering and tracking these standard protocols.

Being able to identify, track and monitor the protocols in an automated manner will enhance your staff’s ability to understand and assess the impact of these protocols on network performance and security. It will also allow you to quickly assess the impact of changes driven by real-time monitoring of your network processes.

Sound like a daunting task?

It doesn’t have to be. Choose a partner to support your efforts and help you build the right NetFlow forensics configuration to support your business.
