Archives

Category Archive for ‘Ransomware’

Deep Packet Inspection (DPI) becomes Obsolete as Encryption hits Critical Mass

Increasing cyber-crimes, virtualization, regulatory obligations, and a severe shortage of cyber and network security personnel are impacting organizations. Encryption, IT complexity, surface scraping and siloed information hinder security and network visibility.

Encryption has become the new normal, driven by privacy and security concerns. Enterprises are finding it increasingly difficult to distinguish bad traffic from good. Encryption's exponential adoption has created a significant security visibility challenge globally, and threat actors now exploit the lack of decryption to avoid detection.

Encrypted data cannot be analyzed, making network risks harder or even impossible to see. More than 95% of internet traffic is now encrypted, blinding Deep Packet Inspection (DPI) and other tools that rely on decrypted packets to inspect traffic and identify risks.

DPI and other techniques that decode packets to detect threats have traditionally been expensive to deploy and maintain and have now entered obsolescence.

As the threat surface grows, organizations have less intelligence with which to identify and manage threats. 99% of network and cyber technologies preserve only 1% of network data, causing severe network blindspots and leading security and networking professionals to overlook real dangers.

CySight provides the most precise cyber detection and forensics for on-premises and cloud networks. CySight has 20x more visibility than all of its competitors combined, substantially improving Security, Application visibility, Zero Trust and Billing. It provides a completely integrated, agentless, and scalable Network, Endpoint, Extended, Availability, Compliance, and Forensics solution, without packet decryption.

CySight uses Flow from most networking equipment. It compares traffic to global threat characteristics to detect infected hosts, Ransomware, DDoS, and other suspicious traffic. CySight’s integrated solution provides network, cloud, IoT, and endpoint security and visibility without packet decryption to detect and mitigate hazards.
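The core matching step, comparing flow records against a feed of known bad-actor addresses, can be sketched in a few lines of Python. The flow tuples, feed contents, and `flag_suspicious` helper below are illustrative assumptions, not CySight's actual implementation:

```python
# Hypothetical flow records: (src_ip, dst_ip, dst_port, bytes)
flows = [
    ("10.0.0.5", "203.0.113.7", 443, 18200),
    ("10.0.0.9", "198.51.100.23", 6667, 540),
]

# Illustrative threat feed of known bad-actor IPs (not a real feed)
threat_feed = {"198.51.100.23", "192.0.2.66"}

def flag_suspicious(flows, threat_feed):
    """Return flows whose destination address matches a known bad actor."""
    return [(src, dst, port, nbytes)
            for src, dst, port, nbytes in flows
            if dst in threat_feed]

print(flag_suspicious(flows, threat_feed))
```

A production system would of course correlate far more context (ports, volumes, direction, timing) rather than relying on a bare IP match.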

Using readily available data sources, CySight records flows at unparalleled depth, in a compact footprint, correlating context and using Machine Learning, Predictive AI and Zero Trust micro-segmentation. CySight identifies and addresses risks, triaging security behaviors and end-point threats with multi-focal telemetry and contextual information to provide full risk detection and mitigation that other solutions cannot.

Cyberwar Defense using Predictive AI Baselining

The world is bracing for a worldwide cyberwar as a result of the current political events. Cyberattacks can be carried out by governments and hackers in an effort to destabilize economies and undermine democracy. Rather than launching cyberattacks, state-funded cyber warfare teams have been studying vulnerabilities for years.

An important transition has occurred, and it is the emergence of bad actors from unfriendly countries that must be taken seriously. The most heinous criminals in this new cyberwarfare campaign are no longer hiding. Experts now believe that a country could conduct more sophisticated cyberattacks on national and commercial networks. Many countries are capable of conducting cyberattacks against other countries, and all parties appear to be prepared for cyber clashes.

So, how would cyberwarfare play out, and how can organizations defend against it?

The first step is to presume that your network has been penetrated or will be compromised soon, and that several attack routes will be employed to disrupt business continuity or vital infrastructure.

Denial-of-service (DoS/DDoS) attacks are capable of spreading widespread panic by overloading network infrastructures and network assets, rendering them inoperable, whether they are servers, communication lines, or other critical technologies in a region.

In 2021, ransomware became the most popular criminal tactic, but in 2022 state cyber warfare teams are keen to use it for first strikes, propaganda, and military fundraising, and it is only a matter of time before it escalates. Ransomware tactics are used in politically motivated attacks to encrypt computers and render them inoperable. Despite using publicly accessible ransomware code, this is now considered weaponized malware because there is little to no possibility that a decryption key will be released. Ransomware assaults by financially motivated criminals have a different objective, which must be identified before it causes financial and social damage, as detailed in a recent RANSOMWARE PAPER.

To win the cyberwar against either cyber extortion or cyberwarfare attacks, you must first have a complete 360-degree view of your network, along with deep transparency and intelligent context to detect dangers within your data.

Given what we already know and the fact that more is continually being discovered, it makes sense to evaluate our one-of-a-kind integrated Predictive AI Baselining and Cyber Detection solution.

YOU DON’T KNOW WHAT YOU DON’T KNOW!

AND IT’S WHAT WE DON’T SEE THAT POSES THE BIGGEST THREATS AND INVISIBLE DANGERS!

You may be surprised to learn that most tools lack the REAL Visibility that could have prevented attacks on a network and its local and cloud-connected assets. There are some serious shortcomings in the base designs of other flow solutions that result in their inability to scale in retention.

This is why smart analysts are realizing that Threat Intelligence and Flow Analytics today are all about having access to long-term granular intelligence. From a forensics perspective, you would appreciate that you can only analyze the data you retain, and with large and growing network and cloud data flows most tools (regardless of their marketing claims) actually cannot scale in retention and choose to drop records in lieu of what they believe is salient data.

Imputed outcome data leads to misleading results and missing data causes high risk and loss!


So how exactly do you go about defending your organization's network and connected assets?

Our approach with CySight focuses on solving Cyber and Network Visibility using granular Collection and Retention with machine learning and A.I.

CySight was designed from the ground up with specialized metadata collection and retention techniques, solving the problem of archiving huge flow feeds in the smallest footprint and at the highest granularity available in the marketplace.

Network issues are broad and diverse and can occur from many points of entry, both external and internal. The network may be used to download or host illicit materials and leak intellectual property.

Additionally, ransomware and other cyber-attacks continue to impact businesses, so you need both machine learning and endpoint threat intelligence to provide a complete view of risk.

The idea of flow-based analytics is simple yet potentially the most powerful way to find ransomware and other network and cloud issues. Every communication leaves a footprint in the flow data, and given the right tools you can retain all the evidence of an attack, infiltration, or exfiltration.

However, not all flow analytic solutions are created equal, and due to an inability to scale in retention, the NetFlow ideal becomes unattainable. For a recently discovered ransomware or trojan, such as "WannaCry", it is helpful to see whether it has been active in the past and when it started.

Another important aspect is having the context to analyze all the related traffic, to identify concurrent exfiltration of an organization's Intellectual Property, and to quantify and mediate the risk. Threat hunting for RANSOMWARE requires multi-focal analysis at a granular level that simply cannot be attained by sampling methods. It does little good to be alerted to a possible threat without having the detail to understand context and impact. The hacker who has control of your system will likely install multiple backdoors on various interrelated systems so they can return when you are off guard.

CySight Turbocharges Flow and Cloud analytics for SecOps and NetOps

As with all CySight Predictive AI Baselining analytics and detection, you don’t have to do any heavy lifting. We do it all for you!

There is no need to create or maintain special groups with Ransomware or other endpoints of ill-repute. Every CySight instance is built to keep itself aware of new threats that are automatically downloaded in a secure pipe from our Threat Intelligence qualification engine that collects, collates, and categorizes threats from around the globe or from partner threat feeds.

CySight identifies your systems conversing with bad actors and allows you to backtrack through historical data to see how long it has been going on.
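The backtracking idea can be illustrated with a toy query over retained flow history. The records and the `first_contact` helper below are hypothetical, shown only to make the concept concrete:

```python
from datetime import datetime

# Hypothetical retained flow history: (ISO timestamp, src_ip, dst_ip)
history = [
    ("2022-01-03T10:15:00", "10.0.0.5", "198.51.100.23"),
    ("2022-02-14T08:02:00", "10.0.0.5", "198.51.100.23"),
    ("2022-02-14T09:30:00", "10.0.0.9", "203.0.113.7"),
]

def first_contact(history, bad_ip):
    """Earliest retained flow to a newly identified bad actor, if any."""
    times = [datetime.fromisoformat(ts)
             for ts, _src, dst in history if dst == bad_ip]
    return min(times) if times else None

# When "198.51.100.23" is newly flagged, find how far back contact goes.
print(first_contact(history, "198.51.100.23"))
```

This is exactly why retention matters: if the collector dropped the older records, the question "how long has this been going on?" becomes unanswerable.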

Summary

IdeaData’s CySight software is capable of the highest level of granularity, scalability, and flexibility available in the network and cloud flow metadata market and supports the broadest range of flow-capable vendors and flow logs.

CySight’s Predictive AI Baselining, Intelligent Visibility, Dropless Collection, automation, and machine intelligence reduce the heavy lifting in alerting, auditing, and discovering your network making threat intelligence, anomaly detection, forensics, compliance, performance analytics and IP accounting a breeze!

Let us help you today. Please schedule a time to meet https://calendly.com/cysight/

Advanced Predictive AI leveraging Granular Flow-Based Network Analytics.

IT’S WHAT YOU DON’T SEE THAT POSES THE BIGGEST THREATS AND INVISIBLE DANGERS.

Existing network management and network security point solutions are facing a major challenge due to the increasing complexity of the IT infrastructure.

The main issue is a lack of visibility into all aspects of physical network and cloud network usage, as well as increasing compliance, service level management, regulatory mandates, a rising level of sophistication in cybercrime, and increasing server virtualization.

With appropriate visibility and context, a variety of network issues can be resolved and handled by understanding the causes of network slowdowns and outages, detecting cyber-attacks and risky traffic, determining their origin and nature, and assessing their impact.

It’s clear that in today’s work-at-home, cyberwar, ransomware world, having adequate network visibility in an organization is critical, but defining how much visibility is considered “right” visibility is becoming more difficult, and more often than not even well-seasoned professionals make incorrect assumptions about the visibility they think they have. These misperceptions and malformed assumptions are much more common than you would expect and you would be forgiven for thinking you have everything under control.

When it comes to resolving IT incidents and security risks and assessing the business impact, every minute counts. The primary goal of Predictive AI Baselining coupled with deep contextual Network Forensics is to improve the visibility of Network Traffic by removing network blindspots and identifying the sources and causes of high-impact traffic.

Inadequate solutions (even the most well-known) lull you into a false sense of comfort; because they tend to retain only the top 2% or 5% of network communications, they frequently cause false positives and red herrings. Cyber threats can come from a variety of sources. These could be the result of new types of crawlers or botnets, infiltration, and ultimately exfiltration that can destroy a business.

Networks are becoming more complex, and negligence, such as failing to update and patch security holes, can open the door to malicious outsiders. Your network could be used to download or host illegal materials, or it could be used entirely or partially to launch an attack. Ransomware attacks are still on the rise, and new ways to infiltrate organizations are being discovered. Denial of Service (DoS) and distributed denial of service (DDoS) attacks continue unabated, posing a significant risk to your organization. Insider threats can also arise from internal hacking or a breach of trust, and your intellectual property may be slowly leaked through negligence, hacking, or disgruntled employees.

Whether you are buying a phone, a laptop, or a cyber security visibility solution, the same rule applies: marketers are out to get your hard-earned cash by flooding you with specifications and solutions whose abilities are radically overstated. Machine Learning (ML) and Artificial Intelligence (AI) are two of the latest acronyms to join the list. The only thing you can know for sure, dear cyber and network professional, is that they hold a lot of promise.

One thing I can tell you from many years of experience in building flow analytics, threat intelligence, and cyber security detection solutions is that without adequate data your results become skewed and misleading. Machine Learning and AI enable high-speed detection and mitigation but without Granular Analytics (aka Big Data) you won’t know what you don’t know and neither will your AI!

In our current Covid world we have all come to appreciate the importance of big data, ML, and AI, and just how quickly they can help mitigate a global health crisis when properly applied. We only need to look back to the years when drug companies did not have access to granular data to see the severe impact that poor data had on people's lives; Thalidomide is one example. In the same way, when cyber and network visibility solutions only surface-scrape data, the resulting information will be incorrect and misleading, and could seriously impact your network and the livelihoods of the people you work for and with.

The Red Pill or The Blue Pill?

The concept of flow- or packet-based analytics is straightforward, yet these are potentially the most powerful tools for detecting ransomware and other network and cloud concerns. All communications leave a trail in the flow data, and with the correct tools you can recover all evidence of an assault, penetration, or exfiltration.

Not all analytic systems are made equal, and the flow/packet ideal becomes unattainable for other tools because of their inability to scale in retention. Even well-known tools have serious flaws and are limited in their ability to retain complete records, which is often overlooked; they do not actually deliver the blindspot visibility they claim.

As already pointed out, over 95% of network and deep packet inspection (DPI) solutions struggle to retain even 2% to 5% of all data captured in medium to large networks, missing diagnoses entirely and delivering significantly misleading analytics that lead to misdiagnosis and risk!

It is critical to have the context and visibility necessary to assess all relevant traffic, to discover concurrent intellectual property exfiltration, and to quantify and mitigate the risk. It is essential to determine whether a newly found trojan or ransomware has been active in the past, when it entered, and what systems are still at risk.

Threat hunting demands multi-focal analysis at a granular level that sampling and surface flow analytics methods simply cannot provide. It is ineffective to be alerted to a potential threat without its context and consequence. The hacker who has gained control of your system is likely to install many backdoors on various interconnected systems to re-enter when you are unaware. As ransomware progresses, it will continue to exploit weaknesses in infrastructures.

Often those most vulnerable are those who believe they have the visibility to detect.

Network Matrix of Knowledge

Post-mortem analysis of incidents is required, as is the ability to analyze historical behaviors, investigate intrusion scenarios and potential data breaches, qualify internal threats from employee misuse, and quantify external threats from bad actors.

The ability to perform network forensics at a granular level enables an organization to discover issues and high-risk communications happening in real-time, or those that occur over a prolonged period such as data leaks. While standard security devices such as firewalls, intrusion detection systems, packet brokers or packet recorders may already be in place, they lack the ability to record and report on every network traffic transfer over the long term.

According to industry analysts, enterprise IT security necessitates a shift away from prevention-centric security strategies toward information- and end-user-centric strategies focused on an infrastructure's endpoints, as advanced targeted attacks are poised to render prevention-centric security strategies obsolete. Today, with cyberwar a reality, this will impact business and government alike.

As every incident response action in today’s connected world includes a communications component, using an integrated cyber and network intelligence approach provides a superior and cost-effective way to significantly reduce the Mean Time To Know (MTTK) for a wide range of network issues or risky traffic, reducing wasted effort and associated direct and indirect costs.

Understanding the Shift Towards Flow-Based Metadata for Network and Cloud Cyber-Intelligence

  • The IT infrastructure is continually growing in complexity.
  • Deploying packet capture across an organization is costly and prohibitive especially when distributed or per segment.
  • “Blocking & tackling” (Prevention) has become the least effective measure.
  • Advanced targeted attacks are rendering prevention‑centric security strategies obsolete.
  • There is a Trend towards information and end‑user centric security strategies focused on an infrastructure’s end‑points.
  • Without making use of collective sharing of threat and attacker intelligence you will not be able to defend your business.

So what now?

If prevention isn’t working, what can IT still do about it?

  • In most cases, information must become the focal point of our information security strategies. IT can no longer impose invasive controls on users' devices or the services they utilize.

Is there a way for organizations to gain a clear picture of what transpired after a security breach?

  • Detailed monitoring and recording of interactions with content and systems. Predictive AI Baselining, Granular Forensics, Anomaly Detection and Threat Intelligence ability is needed to quickly identify what other users were targeted, what systems were potentially compromised and what information was exfiltrated.

How do you identify attacks without signature-based mechanisms?

  • Pervasive monitoring enables you to identify meaningful deviations from normal behavior to infer malicious intent. Nefarious traffic can be identified by correlating real-time threat feeds with current flows. Machine learning can be used to discover outliers and repeat offenders.
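As a toy illustration of flagging "meaningful deviations from normal behavior", the sketch below applies a simple z-score test to per-minute byte counts for one host. Real baselining engines are far more sophisticated, and all the numbers here are invented:

```python
import statistics

# Hypothetical per-minute byte counts for one host (baseline window)
baseline = [1200, 1350, 1100, 1280, 1225, 1300, 1190, 1260]
current = 9800  # sudden burst, e.g. a possible exfiltration

def is_anomalous(baseline, value, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(value - mean) / stdev > threshold

print(is_anomalous(baseline, current))  # True
```

The same idea generalizes to outlier detection across many dimensions (ports, peers, packet sizes), which is where machine learning earns its keep.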

Summing up

Network security and network monitoring have come a long way and jumped through all kinds of hoops to reach the point they have today. Unfortunately, over the years, cyber marketing has outpaced cyber solutions, and we now have misconceptions that can do considerable damage to an organization.

The biggest threat is always the one you cannot see, the one that hits you the hardest once it has slowly and comfortably established itself in a network undetected. Complete visibility can only be achieved through 100% collection and retention of all data traversing a network; otherwise even a single blindspot will affect the entire organization as if it were never protected to begin with. Just as with a single weak link in a chain, cyber criminals will find the perfect access point for penetration.

Inadequate solutions that only retain the top 2% or 5% of network communications frequently cause false positives and red herrings. You need to have 100% access to your comms data for Full Visibility, but how can you be sure that you will?

You need free access to Full Visibility to unlock all your data, and an intelligent Predictive AI technology that can autonomously and quickly identify what's not normal at both the macro and micro level of your network, cloud, servers, IoT devices, and other network-connected assets.

Get complete visibility with CySight now >>>

5 Ways Flow Based Network Monitoring Solutions Need to Scale

Partial Truth Only Results in Assumptions

A common gripe for network engineers is that their current network monitoring solution doesn't provide the depth of information needed to quickly ascertain the true cause of a network issue. Imagine reading a book that is missing four out of every six words: understanding the context is hopeless, and the book has next to no value. Many have already over-complicated their monitoring systems and methodologies by continuously extending their capabilities with a plethora of add-ons, or by relying on disparate systems that often don't interface well with each other. There is also an often-mistaken belief that the network monitoring solutions they have invested in will suddenly give them the depth needed for the visibility required to manage complex networks.

A best-value approach to network monitoring is to use a flow-based analytics methodology such as NetFlow, sFlow or IPFIX.

The Misconception & What Really Matters

In this market, it's common for the industry to express a flow software's scaling capability in flows-per-second. Using flows-per-second as a guide to scalability is misleading: it is often used to hide a flow collector's inability to archive flow data by overstating its collection capability, and quoting seconds instead of minutes lets vendors present a larger number. It's important to look beyond flows-per-second to the picture created once all the elements are used together. Much like a painting of a detailed landscape, the finer the brush and the more colors used, the more complete and truly detailed the resulting picture.

Granularity is the prime factor to focus on, specifically granularity retained per minute (flow retention rate). Naturally, speed is a significant and critical factor to be aware of as well. The speed and flexibility of alerting, reporting, forensic depth, and diagnostics all play a strategic role but will be hampered by scalability limitations. Observing behavior under high flow variance or sudden bursts, and considering the number of devices and interfaces, makes plain the absolute significance of scalability in producing actionable insights and analytics; without it, the ability to retain short-term and historical collections, which provide vital trackback information, would be nonexistent. To provide the visibility needed to accomplish the ever-growing number of daily tasks analysts and engineers face, and to resolve issues to completion, a Network Monitoring System (NMS) must be able to scale at every level of consumption and retention.
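A quick back-of-the-envelope calculation shows why a flows-per-second figure on its own says little about visibility; the ingestion rate and retention percentage below are hypothetical:

```python
flows_per_second = 50_000  # advertised ingestion rate (hypothetical)
retention_pct = 0.02       # suppose only the top 2% of flows are retained

flows_per_minute = flows_per_second * 60
retained = int(flows_per_minute * retention_pct)
dropped = flows_per_minute - retained

print(f"Ingested per minute: {flows_per_minute:,}")  # 3,000,000
print(f"Retained per minute: {retained:,}")          # 60,000
print(f"Dropped (invisible): {dropped:,}")           # 2,940,000
```

A collector can honestly advertise 50,000 flows per second while silently discarding 98% of the evidence every minute, which is exactly the gap between ingestion and retention.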

How Should Monitoring Solutions Scale?

A Flow-Based network monitoring software needs to scale in its collection of data in five ways:

Ingestion Capability – Also referred to as Collection, this is the number of flows that can be consumed by a single collector. It is a feat most monitoring solutions are able to accomplish; unfortunately, it is also the one they pride themselves on. It is an important ability but only the first of several crucial capabilities that determine the quality of insights and intelligence of a monitoring system. Ingestion is only the ability to take in data; it does not mean retention, and therefore does very little on its own.

Digestion Capability – Also referred to as Retention, this is the number of flow records that can be retained by a single collector; it is the most overlooked and difficult step in the network monitoring world. Digestion/flow retention rates are particularly critical to quantify, as they dictate the level of granularity that allows a flow-based NMS to deliver the visibility required for quality Predictive AI Baselining, Anomaly Detection, Network Forensics, Root Cause Analysis, Billing Substantiation, Peering Analysis, and Data Retention compliance. Without retaining data, you cannot inspect it beyond the surface level, losing the value of network or cloud visibility.

Multitasking Processes – Pertains to the multitasking strength of a solution: its ability to scale and spread the load of collection processes across multiple CPUs on a single server. This seems an obvious approach to collection, but many systems take a linear, serial approach to ingesting multiple streams of flow data that does not allow their technologies to scale when new flow-generating devices, interfaces, or endpoints are added, forcing you to deploy multiple instances of a solution, which becomes ineffective and expensive.

Clustered Collection – Refers to the ability of a flow-based solution to run a single data warehouse that takes its input from a cluster of collectors acting as a single unit, as a means to load balance. In a large environment you typically have very large equipment sending massive amounts of data to collectors. To handle all that data, you must distribute the load among a number of collectors in a cluster across multiple machines that make sense of it, instead of overloading a single machine. This ability enables organizations to scale up their use of data instead of dropping it as they attempt to collect it.
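One generic way to implement such load balancing is to hash each exporter's address to a collector in the cluster, so the same device always lands on the same collector. The sketch below illustrates the idea under that assumption; it is not a description of any particular product:

```python
import hashlib

collectors = ["collector-a", "collector-b", "collector-c"]  # hypothetical cluster

def assign_collector(exporter_ip, collectors):
    """Deterministically map a flow exporter to one collector in the cluster."""
    digest = hashlib.sha256(exporter_ip.encode()).hexdigest()
    return collectors[int(digest, 16) % len(collectors)]

# The same exporter always maps to the same collector, spreading load evenly.
for ip in ("192.0.2.1", "192.0.2.2", "192.0.2.3"):
    print(ip, "->", assign_collector(ip, collectors))
```

Deterministic assignment keeps all records from one device in one place, which simplifies the downstream correlation into a single data warehouse.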

Hierarchical Correlation – The purpose of hierarchical correlation is to take information from multiple databases and aggregate it into a single super-SIEM. With the need to consume and retain huge amounts of data comes the need to manage and oversee that data intelligently. Hierarchical correlation is designed to enable parallel analytics across distributed data warehouses and aggregate their results. In the field of network monitoring, being overwhelmed with data to the point where you cannot find what you need is as useful as being given all the books in the world and asked a single question that is answered in only one of them.

Network traffic visibility is considerably improved by reducing network blindspots and providing qualified sources and reasons for communications that impair business continuity. The capacity to capture flow at a finer level allows for new Predictive AI Baselining and Machine Learning application analysis and risk mitigation.

There are many critical abilities that a network monitoring solution must offer its users, and all are affected by whether or not the solution can scale.

Visibility is a range, not binary; the question is not whether you have visibility, but whether you have enough to achieve your goals and keep your organization productive and safe.

How NetFlow Solves for Mandatory Data Retention Compliance

Compliance in IT is not new, and laws regulating how organizations should manage their customer data, such as HIPAA, PCI, and SCADA, already exist; network transaction logging is now beginning to be required of businesses. Insurance companies are gearing up to qualify businesses by the information they retain to protect their services and customer information. Government and industry regulations and enforcement are becoming increasingly stringent.

Most recently many countries have begun to implement Mandatory Data Retention laws for telecom service providers.

Governments require a mandatory data retention scheme because more and more crime is moving online from the physical world, while ISPs are keeping less data and retaining it for a shorter time. This negatively impacts the investigative capabilities of law enforcement and security agencies that need timely information to help save lives, whether by spotting lone-wolf terrorists early or by protecting vulnerable members of society from abuse by sexual predators, ransomware, or other online crimes.

Although there is no doubt as to the value of mandatory data retention schemes, they are not without justifiable privacy and human rights concerns, and they are expensive to implement.

It takes cash, time, and skills that many ISPs and companies simply cannot afford. Internet and managed service providers and large organizations must take proper precautions to remain in compliance. Heavy fines, license and certification issues, and other penalties can result from non-compliance with mandatory data retention requirements.

According to the Australian Attorney-General’s Department, Australian telecommunications companies must keep a limited set of metadata for two years. Metadata is information about a communication (the who, when, where and how)—not the content or substance of a communication (the what).

A commentator from the Sydney Morning Herald observed that "…Security, intelligence and law enforcement access to metadata which overrides personal privacy is now in contention worldwide…" and speculated that with the introduction of Australian metadata laws "…this country's entire communications industry will be turned into a surveillance and monitoring arm of at least 21 agencies of executive government. …".

In Australia, many smaller ISPs fear that failing to comply will put them out of business. Internet Australia's Laurie Patton said, "It's such a complicated and fundamentally flawed piece of legislation that there are hundreds of ISPs out there that are still struggling to understand what they've got to do".

As for the anticipated costs, a survey sent to ISPs by telecommunications industry lobby group Communications Alliance found that  “There is a huge variance in estimates for the cost to business of implementing data retention – 58 per cent of ISPs say it will cost between $10,000 and $250,000; 24 per cent estimate it will cost over $250,000; 12 per cent think it will cost over $1,000,000; some estimates go as high as $10 million.”

An important cost to consider in compliance is the ease of data reporting when government or corporate compliance teams request information for a specific IPv4 or IPv6 address. If the data is stored in a data warehouse that is difficult to filter, the service provider may incur penalties or be seen as non-compliant. Flexible filtering and automated reporting are therefore critical to producing the forensics required for compliance in a timely and cost-effective manner.
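A minimal sketch of such a compliance filter, assuming retained metadata is stored as simple (timestamp, subscriber IP, remote IP) tuples (a hypothetical schema), using Python's standard `ipaddress` module so that both IPv4 and IPv6 queries are handled uniformly:

```python
import ipaddress

# Hypothetical retained metadata records: (timestamp, subscriber_ip, remote_ip)
records = [
    ("2023-05-01T12:00:00", "203.0.113.10", "198.51.100.5"),
    ("2023-05-01T12:01:00", "2001:db8::1", "198.51.100.7"),
]

def records_for_address(records, query):
    """Return all retained records involving the queried IPv4/IPv6 address."""
    target = ipaddress.ip_address(query)
    return [r for r in records
            if target in (ipaddress.ip_address(r[1]), ipaddress.ip_address(r[2]))]

print(records_for_address(records, "2001:db8::1"))
```

In practice the same query would run against an indexed flow warehouse; the point is that per-address filtering must be fast and automatable to satisfy a lawful request without penalty.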

Although different laws govern different countries, the main requirement of mandatory data retention laws for ISPs is to maintain sufficient information at a granular level to assist governments in finding bad actors such as terrorists, corporate spies, ransomware operators, and pedophiles. In some countries this means telcos are required to keep data on the IP addresses users connect to for up to 10 weeks; in others, just the totals of subscriber usage for each IP used, for up to 2 years.

Although information remains local to each country and is governed by relevant privacy laws, law enforcement will eventually gain the visibility to track relayed data, such as communications carried by Tor browsers, onion routers, and Freenet, beyond their relay and exit nodes.

There is no doubt in my mind that, with heightened states of security and increasing online crime, there is a global need for governments to intervene with online surveillance to protect children from exploitation, reduce terrorism, and build defensible infrastructures, while at the same time implementing data retention systems with the inbuilt smarts to balance compliance and privacy rather than acting as a blanket catch-all. There is already an available solution for the Internet communications component, based on NetFlow, that helps ISPs comply quickly at low cost while allowing data retention rules to be implemented that limit intrusion on an individual's privacy.

NetFlow solutions are cheap to deploy and, unlike packet analyzers, need not be deployed at every interface. They can use the existing router, switch or firewall investment to provide continuous network monitoring across the enterprise, giving the service provider or organization powerful tools for data retention compliance.

NetFlow technology, if sufficiently scalable, granular and flexible, can deliver the visibility, accountability and measurability required for data retention because it can include features that:

  • Supply a real-time look at network and host-based activities down to the individual user and device;
  • Increase user accountability for introducing security risks that impact the entire network;
  • Track, measure and prioritize network risks to reduce Mean Time to Know (MTTK) and Mean Time to Repair or Resolve (MTTR);
  • Deliver the data IT staff needs to engage in in-depth forensic analysis related to security events and official requests;
  • Seamlessly extend network and security monitoring to virtual environments;
  • Assist IT departments in maintaining network up-time and performance, including mission critical applications and software necessary to business process integrity;
  • Assess and enhance the efficacy of traditional security controls already in place, including firewalls and intrusion detection systems;
  • Capture and archive flows for complete data retention compliance.

Compared to other analysis solutions, NetFlow fills in the gaps where those technologies cannot deliver. A well-architected NetFlow solution can provide a comprehensive landscape of tools to help businesses and service providers achieve and maintain data retention compliance for a wide range of government and industry regulations.

Performance Monitoring & Security Forensics: The 1-2 Punch for Network and IT Infrastructure Visibility

Big Data – A Global Approach To Local Threat Detection

From helping prevent loss of life in the event of a natural disaster, to aiding marketing teams in designing more targeted strategies to reach new customers, big data seems to be the chief talking point amongst a broad and diverse circle of professionals.

For Security Engineers, big data analytics is proving to be an effective defense against evolving network intrusions, thanks to the delivery of near real-time insights based on high volumes of diverse network data. This is largely due to technological advances that have made it possible to transmit, capture, store and analyze swathes of data through high-powered and relatively low-cost computing systems.

In this blog, we’ll take a look at how big data is bringing deeper visibility to security teams as environments increase in complexity and our reliance on pervasive network systems intensifies.

Big data analysis is providing answers to the data deluge dilemma

Large environments generate gigabytes of raw user, application and device metrics by the minute, leaving security teams stranded in a deluge of data. Placing them further on the back foot is the need to sift through this data, which involves considerable resources that at best only provide a retrospective view on security breaches.

Big data offers a solution to the issue of “too much data too fast” through the rapid analysis of swathes of disparate metrics through advanced and evolving analytical platforms. The result is actionable security intelligence, based on comprehensive datasets, presented in an easy-to-consume format that not only provides historic views of network events, but enables security teams to better anticipate threats as they evolve.

In addition, big data’s ability to facilitate more accurate predictions on future events is a strong motivating factor for the adoption of the discipline within the context of information security.

Leveraging big data to build the secure networks of tomorrow

As new technologies arrive on the scene, they introduce businesses to new opportunities – and vulnerabilities. However, the application of Predictive AI Baselining analytics to network security in the context of the evolving network is helping to build the secure, stable and predictable networks of tomorrow. Detecting modern, more advanced threats requires big data capabilities from incumbent intrusion detection and prevention (IDS/IPS) solutions to distinguish normal traffic from potential threats.

By contextualizing diverse sets of data, Security Engineers can more effectively detect stealthily designed threats that traditional monitoring methodologies often fail to pick up. For example, Advanced Persistent Threats (APT) are notorious for their ability to go undetected by masking themselves as day-to-day network traffic. These low visibility attacks can occur over long periods of time and on separate devices, making them difficult to detect since no discernible patterns arise from their activities through the lens of traditional monitoring systems.
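The "no discernible pattern" problem is essentially one of contextualizing flows over long windows. As a purely illustrative sketch (field names, thresholds and the flagging rule are all assumptions, not any product's algorithm), regularly spaced low-volume flows between the same host pair can be surfaced like this:

```python
# Flag "low and slow" beaconing that blends into day-to-day traffic:
# small flows between a host pair at near-constant intervals over a long
# window are suspicious even though each flow looks ordinary on its own.
from collections import defaultdict
from statistics import pstdev, mean

def find_beacons(flows, min_events=5, max_jitter=2.0, max_bytes=1024):
    by_pair = defaultdict(list)
    for ts, src, dst, nbytes in flows:
        if nbytes <= max_bytes:          # low-volume flows only
            by_pair[(src, dst)].append(ts)
    suspects = []
    for pair, times in by_pair.items():
        times.sort()
        if len(times) < min_events:
            continue
        gaps = [b - a for a, b in zip(times, times[1:])]
        if pstdev(gaps) <= max_jitter:   # regular, low-jitter intervals
            suspects.append((pair, round(mean(gaps), 1)))
    return suspects

# One host "phoning home" every 600 s, amongst ordinary bulk traffic
flows = [(t, "10.0.0.5", "203.0.113.9", 300) for t in range(0, 3600, 600)]
flows += [(42, "10.0.0.7", "198.51.100.1", 90000)]
beacons = find_beacons(flows)
```

Real detection engines weigh many more dimensions, but the principle is the same: context across time and devices exposes what a per-flow view cannot.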

Big data Predictive AI Baselining analytics lifts the veil on threats that operate under the radar of traditional signature and log-based security solutions by contextualizing traffic and giving NOCs a deeper understanding of the data that traverses the wire.

Gartner states that, “Big data Predictive AI Baselining analytics enables enterprises to combine and correlate external and internal information to see a bigger picture of threats against their enterprises.”  It also eliminates the siloed approach to security monitoring by converging network traffic and organizing it in a central data repository for analysis; resulting in much needed granularity for effective intrusion detection, prevention and security forensics.

In addition, Predictive AI Baselining analytics eliminates barriers to internal collaborations between Network, Security and Performance Engineers by further contextualizing network data that traditionally acted as separate pieces of a very large puzzle.

So is big data Predictive AI Baselining analytics the future of network monitoring?

In a way, NOC teams have been using big data long before the discipline went mainstream. Large networks have always produced high volumes of data at high speeds – only now, that influx has intensified exponentially.

Thankfully, with the rapid evolution of computing power at relatively low cost, the possibilities of what our data can tell us about our networks are becoming more apparent.

The timing couldn’t have been more appropriate since traditional perimeter-based IDS\IPS no longer meet the demands of modern networks that span vast geographical areas with multiple entry points.

In the age of cloud, mobility, ubiquitous Internet and the ever-expanding enterprise environment, big data capabilities will and should become an intrinsic part of virtually every security apparatus.

8 Keys to Understanding NetFlow for Network Security, Performance & Overall IT Health

How to counter-punch botnets, viruses, ToR & more with Netflow [Pt 1]

You can’t secure what you can’t see and you don’t know what you don’t know.

Many network and security professionals assume that they can simply analyze data captured using their standard security devices, like firewalls and intrusion detection systems. However, they quickly discover these devices’ limitations: they are not designed to, and cannot, record and report on every transaction, due to a lack of granularity, scalability and historic data retention. Network devices like routers, switches, Wi-Fi or VMware servers also typically lack any sophisticated anti-virus software.

The mark of a well-constructed traffic analyzer is presenting information in a manner that quickly enables security teams to act: simple views, with deep contextual data supporting the summaries, ensure teams are not bogged down by detail unless it is required, and even then provide elegant means to extract forensics, with simple but powerful visuals that enable a quick grasp of the context and impact of a security event.

Using NetFlow Correlation to Detect Intrusions

Host Reputation is one of the best detection methods that can be used against Advanced Persistent Threats. There are many data sources to choose from and some are more comprehensive than others.

Today these blacklists are mostly IPv4 and Domain orientated designed to be used primarily by firewalls, network intrusion systems and antivirus software.

They can also be used very successfully in NetFlow systems, as long as the selected flow technology can scale to support the thousands of known compromised end-points, frequently update the threat data, and record the full detail of every compromised flow and of the subsequent conversations with the compromised systems, in order to discover other related breaches that may have occurred or are occurring.

According to Mike Schiffman at Cisco,

“If a given IP address is known to be that of a spammer or a part of a botnet army it can be flagged in one of the ill repute databases … Since these databases are all keyed on IP address, NetFlow data can be correlated against them and subsequent malicious traffic patterns can be observed, blocked, or flagged for further action. This is NetFlow Correlation.“

The kind of data we can expect to find in the reputation databases is IP addresses known to be acting in some malicious or negative manner, such as being seen by multiple global honeypots. Some have been identified as part of a well-known botnet such as Palevo or Zeus, whilst other IPs are known to have been distributing malware or Trojans. Many kinds of lists are useful to correlate, such as known ToR exit points or relays, which have become particularly risky of late as a common means of introducing ransomware, and should certainly not be seen conversing with any host within a corporate, government or other sensitive environment.
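The correlation idea Schiffman describes is straightforward to sketch. The following is a minimal, hypothetical example (blacklist entries are documentation-range placeholders, and the two-pass logic is an illustration, not any vendor's implementation): flag flows touching a blacklisted address, then follow the flagged endpoints' other conversations to find related breaches.

```python
# NetFlow correlation against an IP reputation list: first pass flags
# flows touching a blacklisted address; second pass finds the other
# conversations of the endpoints that talked to bad-reputation hosts.
BLACKLIST = {"203.0.113.66", "198.51.100.99"}   # e.g. botnet C2, ToR exit

def correlate(flows):
    hits = [f for f in flows if f["src"] in BLACKLIST or f["dst"] in BLACKLIST]
    # Internal endpoints seen conversing with bad-reputation hosts
    compromised = {f["src"] if f["dst"] in BLACKLIST else f["dst"] for f in hits}
    # Any other conversations involving those endpoints (possible spread)
    related = [f for f in flows if f not in hits and
               (f["src"] in compromised or f["dst"] in compromised)]
    return hits, related

flows = [
    {"src": "10.1.1.20", "dst": "203.0.113.66"},   # talks to a listed C2
    {"src": "10.1.1.20", "dst": "10.1.1.30"},      # possible lateral movement
    {"src": "10.1.1.40", "dst": "10.1.1.50"},      # unrelated traffic
]
hits, related = correlate(flows)
```

At production scale the blacklist holds hundreds of thousands of entries and must be refreshed continuously, which is why the scalability caveats above matter.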

Using a tool like CySight’s advanced End-Point Threat Detection allows NetFlow data to be correlated against hundreds of thousands of IP addresses of questionable reputation including ToR exits and relays in real-time with comprehensive historical forensics that can be deployed in a massively parallel architecture.

As a trusted source of deep network insights built on big data analysis capabilities, Netflow provides NOCs with an end-to-end security and performance monitoring and management solution. For more information on Netflow as a performance and security solution for large-scale environments, download our free Guide to Understanding Netflow.

Cutting-edge and innovative technologies like CySight deliver the deep end-to-end network visibility and security context required to assist in speedily impeding harmful attacks.


Microsoft Nobelium Hack

Solarwinds Hackers Strike Again

Another painful round of cyber-attacks has been carried out by what Microsoft discovered to be a Russian state-sponsored hacking group called Nobelium, this time attacking a Microsoft support agent’s computer and exposing customers’ subscription information.

The activity tracked by Microsoft led to Nobelium, the same group that executed the SolarWinds Orion hack in December 2020. The attack was first discovered when Microsoft itself detected information-stealing malware on one of its customer support agents’ machines. Infiltration occurred using password spraying and brute-force attacks in an attempt to gain access to Microsoft accounts.

Microsoft said Nobelium had targeted over 150 organizations worldwide in the last week, including government agencies, think tanks, consultants and nongovernmental organizations, reaching over 3,000 email accounts, mostly in the USA but also in at least 24 other countries. The event is described as an “active incident”, meaning the attack is very much live and more has yet to be discovered. Microsoft is attempting to notify all who are affected.

The attack was carried out through an email marketing account belonging to the U.S. Agency for International Development. Recipients received a phishing email that looked authentic but contained a link to a malicious file. Once the file was downloaded, the machine was compromised and a back door created, enabling the bad actor to steal data and infect other machines on the network.

In April this year, the Biden administration pointed the finger at the Russian Foreign Intelligence Service (SVR) as being responsible for the SolarWinds attack, exposing the Nobelium group. It appears that this exposure led the group to drop the stealth approach they had been using for months, and on May 25 they ran a “spear phishing” campaign involving a zero-day vulnerability.

Nobelium Phishing Attack

Staying in Control of your Network

IdeaData’s Marketing Manager, Tomare Curran, stated on the matter, “These kinds of threats can hide and go unnoticed for years until the botnet master decides to activate the malware. Therefore, it’s imperative to maintain flow metadata records of every transaction so that when a threat finally comes to light you can set Netflow Auditor’s HindSight Threat Analyzer to search back and help you find out if or when you were compromised and what else could have been impacted.”

NetFlow Auditor constantly keeps its eyes on your network and provides total visibility to quickly identify and alert on who is doing What, Where, When, with Whom and for How Long, right now or months ago. It baselines your network to discover unusual network behaviors and, using machine learning and A.I. diagnostics, provides early warning of anomalous communications.

Cyber security experts at IdeaData do not believe the group will stop their operations due to being exposed. IdeaData is offering Netflow Auditor’s Integrated Cyber Threat Intelligence solution free for 60 days to allow companies to help cleanse their network from newly identified threats.

Have any questions?

Contact us at:  tomare.curran@netflowauditor.com

How to Improve Cyber Security with Advanced Netflow Network Forensics

Most organizations today deploy network security tools that are built to perform limited prevention – traditionally “blocking and tackling” at the edge of a network using a firewall or by installing security software on every system.

This is only one third of a security solution, and has become the least effective measure.

The growing complexity of the IT infrastructure is the major challenge faced by existing network security tools. The major forces impacting current network security tools are the rising level of sophistication of cybercrimes, growing compliance and regulatory mandates, expanding virtualization of servers and the constant need for visibility compounded by ever-increasing data volumes. Larger networks involve enormous amounts of data, into which the incident teams must have a high degree of visibility for analysis and reporting purposes.

An organization’s network and security teams are faced with increasing complexities, including network convergence, increased data and flow volumes, intensifying security threats, government compliance issues, rising costs and network performance demands.

With network visibility and traceability also top priorities, companies must look to security network forensics to gain insight and uncover issues. The speed with which an organization can identify, diagnose, analyze, and respond to an incident will limit the damage and lower the cost of recovery.

Analysts are better positioned to mitigate risk to the network and its data through security-focused network forensics applied at the granular level. Only with sufficient granularity, historic visibility, and tools able to machine-learn from the network’s big data can the risk of an anomaly be properly diagnosed and mitigated.

Doing so helps staff identify breaches that occur in real-time, as well as Insider threats and data leaks that take place over a prolonged period. Insider threats are one of the most difficult to detect and are missed by most security tools.

Many network and security professionals assume that they can simply analyze data captured using their standard security devices, like firewalls and intrusion detection systems. However, they quickly discover these devices’ limitations: they are not designed to, and cannot, record and report on every transaction, due to a lack of deep visibility, scalability and historic data retention, making old-fashioned network forensic reporting expensive and impractical.

NetFlow is an analytics software technology that enables IT departments to accurately audit network data and host-level activity. It enhances network security and performance, making it easy to identify suspicious user behaviors and protect your entire infrastructure.

A well-designed NetFlow forensic tool should include powerful features that can allow for:

  • Micro-level data recording to assist in identification of real-time breaches and data leaks;
  • Event notifications and alerts for network administrators when irregular traffic movements are detected;
  • Tools that highlight trends and baselines, so IT staff can provision services accordingly;
  • Tools that learn normal behavior, so Network Security staff can quickly detect and mitigate threats;
  • Highly granular traffic capture over time to enable deep visibility across the entire network infrastructure;
  • 24/7 automation and flexible reporting processes that deliver usable business intelligence and security forensics, particularly for those analytics that take a long time to produce.
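The "learn normal behavior, then alert" feature above can be illustrated with a deliberately simple model. Real products use far richer baselining; the rolling-mean-plus-three-sigma rule, the field values and the noise floor here are all assumptions for the sketch.

```python
# Minimal baseline-deviation alert: learn the typical flow volume per
# interval from recent history, then flag a sample that exceeds the
# baseline by more than `sigmas` standard deviations.
from statistics import mean, pstdev

def check_sample(history, sample, sigmas=3.0):
    base, spread = mean(history), pstdev(history)
    threshold = base + sigmas * max(spread, 1.0)  # floor avoids flat-history noise
    return sample > threshold, threshold

history = [1000, 1100, 950, 1050, 990, 1010]   # bytes/s in prior intervals
alert, threshold = check_sample(history, 9000)  # sudden surge -> alert
quiet, _ = check_sample(history, 1050)          # within normal range
```

In practice the baseline is kept per host, per application and per time-of-day, which is where the scalability requirements discussed earlier come from.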

Forensic analysts require both high-level and detailed visibility through aggregating, division and drilldown algorithms such as:

  • Deviation / Outlier analysis
  • Bi-directional analysis
  • Cross section analysis
  • Top X/Y analysis
  • Dissemination analysis
  • Custom Group analysis
  • Baselining analysis
  • Percentile analysis
  • QoS analysis
  • Packet Size analysis
  • Count analysis
  • Latency and RTT analysis
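Two of the drilldown algorithms listed above can be sketched on toy flow totals: Top X/Y analysis (heaviest talkers) and percentile analysis (the 95th percentile commonly used in capacity planning and billing). The data and the nearest-rank percentile method are illustrative choices, not a prescription.

```python
# Top X/Y analysis: rank talkers by total bytes and keep the heaviest X.
# Percentile analysis: nearest-rank 95th percentile over sampled values.
import math

def top_x(totals, x=3):
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:x]

def percentile(values, pct=95):
    ordered = sorted(values)
    rank = math.ceil(pct / 100 * len(ordered))  # nearest-rank method
    return ordered[rank - 1]

talkers = {"10.0.0.1": 500, "10.0.0.2": 9000, "10.0.0.3": 1200, "10.0.0.4": 300}
heaviest = top_x(talkers, x=2)
p95 = percentile([100, 200, 300, 400, 500, 600, 700, 800, 900, 1000])
```

The other algorithms in the list (deviation, bi-directional, dissemination, and so on) are variations on the same theme of aggregating and slicing flow records along different dimensions.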

Further, when integrated with a visual analytics process, it will enable additional insights for the forensic professional when analyzing subsets of the flow data surrounding an event.

In some ways it needs to act as a log analyzer, security information and event management (SIEM) and a network behavior anomaly and threat detector all rolled into one.

The ultimate goal is to deploy a multi-faceted flow-analytics solution that can complement your business by providing extreme visibility and eliminating network blindspots, both in your physical infrastructure and in the cloud, automatically detecting and diagnosing anomalous traffic across your entire network and improving your mean time to detect and repair.


NetFlow for Advanced Threat Detection

Today’s networks are vital assets to the business and require absolute protection against unauthorized access, malicious programs and degradation of performance. It is no longer enough to use only anti-virus applications.

By the time malware is detected and its signatures added to the antiviral definitions, access has been obtained and havoc wreaked, or the malware has buried itself inside the network and is harvesting data and passwords for later exploitation.

An article by Drew Robb in eSecurity Planet on September 3, 2015 (https://www.esecurityplanet.com/network-security/advanced-threat-detection-buying-guide-1.html) cited the Verizon 2015 Data Breach Investigations Report where 70 respondents reported over 80,000 security incidents which led to more than 2000 serious breaches in one year.

The report noted that phishing is commonly used to gain access; the malware then accumulates passwords and account numbers and learns the security defenses before launching an attack. A telling remark was made: “It is abundantly clear that traditional security solutions are increasingly ineffectual and that vendor assurances are often empty promises,” said Charles King, an analyst at Pund-IT. “Passive security practices like setting and maintaining defensive security perimeters simply don’t work against highly aggressive and adaptable threat sources, including criminal organizations and rogue states.”

So what can businesses do to protect themselves? How can they be proactive in addition to the passive perimeter defenses?

The very first line of defense is better education of users. In one test, an e-mail message was sent to the users, purportedly from the IT department, asking for their passwords in order to “upgrade security.” While 52 people asked the IT department if this was a real request, 110 mailed their passwords right back. In their attempts to be productive, over half of the recipients of phishing e-mails responded within an hour!

Another method of advanced threat protection is NetFlow Monitoring.

IT departments and managed service providers (MSPs) can use monitoring capabilities to detect, prevent and report adverse effects on the network.

Traffic monitoring, for example, watches the flow of information and data traversing critical nodes and network links. Without using intrusive probes, this information helps decipher how applications are using the network and which ones are becoming bandwidth hogs. These are then investigated further to determine what is causing the problem and how best to manage the issue. Just adding more bandwidth is not the answer!
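The "bandwidth hog" triage described above reduces to aggregating flow bytes by application. A minimal sketch follows; the port-to-application mapping is a simplification for illustration (real flow analyzers classify applications far more richly).

```python
# Aggregate flow bytes by application port to surface bandwidth hogs,
# using only flow records -- no intrusive probes on the links themselves.
from collections import Counter

APP_BY_PORT = {443: "https", 80: "http", 445: "smb", 3389: "rdp"}

def bytes_by_app(flows):
    usage = Counter()
    for dst_port, nbytes in flows:
        usage[APP_BY_PORT.get(dst_port, "other")] += nbytes
    return usage.most_common()   # heaviest application first

flows = [(443, 5_000_000), (80, 750_000), (445, 12_000_000), (9999, 40_000)]
hogs = bytes_by_app(flows)
```

Once the hog is identified, the question shifts from "how much" to "why": which hosts, which users, and whether the usage is legitimate.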

IT departments review this data to investigate which personnel are the power users of which applications, when the peak traffic times are and why, and similar information in addition to flagging and diving in-depth to review anomalies that indicate a potential problem.

If there are critical applications or services that the clients rely on for key account revenue streams, IT can provide real-time monitoring and display of the health of the networks supporting those applications and services. It is this ability to observe, analyze, and report on the network health and patterns of usage that provides the ability to make better decisions at the speed of business that CIOs crave.

CySight excels at network Predictive AI Baselining analytics solutions. It scales to collect, analyze, and report on NetFlow datastreams of over one million flows/second. Its team of specialists has prepped, installed, and deployed over 1,000 CySight performance monitoring solutions, including at over 50 Fortune 1000 companies and some of the largest ISPs/telcos in the world. A global leader, recognized with awards for Security and Business Intelligence at the World Congress of IT, CySight is also welcomed by Cisco as a Technology Development Partner.
