Who’s Afraid of Compliance? The Importance of SIEM

The move to the cloud has brought massive changes to how infrastructure and computing environments are managed. SIEMs have evolved along with it and are now offered as a service (SaaS), for example Microsoft Sentinel, IBM QRadar on Cloud, Splunk, and others.

So why would you want a cloud SIEM?

With a traditional SIEM, installed manually on-premises, you have to manage updates yourself, schedule downtime to apply them, provision resources for the server that hosts the SIEM, manage all the related network settings, write all the detection rule logic on your own, find workarounds to get logs into the system, and spend a great deal of time analyzing those logs.

With a cloud SIEM you don’t have to do any of that; you simply subscribe to a SaaS solution that the vendor manages for you. Yes, you still need to configure which logs are sent to the SIEM, but this is much easier in the cloud than on-premises. SIEM vendors usually write parsers and modules for the third-party services that need to be monitored, or provide integrated solutions with a few sample rules to start from, which greatly reduces the complexity of configuration.

This is a huge benefit of a cloud SIEM: it eases management considerably and lets analysts focus on collecting events, designing new alert logic, proactive monitoring by reviewing logs and hunting for anomalies, automating incident handling, and more.

Is it beneficial for everyone?

Usually the answer is yes! Especially if you have a cloud environment or need to comply with various regulations, since a cloud SIEM brings a lot of flexibility and is easier to manage. Cloud-based SIEMs usually offer out-of-the-box integration with many other cloud tools, such as threat intelligence feeds, compliance platforms, DevOps and CI/CD tools, and vulnerability assessment solutions. All of this helps create an advanced security monitoring ecosystem that gives us great flexibility to monitor and detect threat actors at different phases of a potential attack. Take, for example, one of the advanced cloud SIEMs: Microsoft Sentinel.

Sentinel is a cloud SIEM + SOAR platform offered by Microsoft. It runs in the Azure cloud, but it is equally well suited to monitoring any other cloud, not just Azure, and it has all the capabilities of a SIEM, including automation (also known as SOAR). Sentinel is built on an Azure Log Analytics workspace: all logs and data are aggregated into the workspace, and Sentinel simply consumes the data from there. Data connectors (the means of collecting system logs) send data to Log Analytics via its API, and Microsoft provides ready-made connector solutions (they are quite fast with that), which simplifies integration; it’s literally two clicks away. Where no built-in connector exists, we can of course write one ourselves with Azure Functions (serverless applications), or even with the Graph API.

Sentinel also has an active GitHub community that is constantly contributing new rules, integrations, playbooks, and dashboards to improve the platform. Updates are automatic: every month new features and integrations are introduced; last month, for example, a built-in connector for Amazon S3 became available. New rules are also introduced regularly, allowing monitoring of specific alerts with just one click. And if you want to automate an action in response to an alert, Azure Logic Apps is integrated with Sentinel and makes the automation very easy.
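As an illustration, a scheduled analytics rule in Sentinel is just a KQL query run on an interval. The sketch below is a hypothetical rule over the standard Azure AD SigninLogs table (the table and column names are the documented schema; the threshold of 10 is an arbitrary example):

```kusto
// Hypothetical analytics rule: accounts with many failed
// Azure AD sign-ins in the last hour.
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType != "0"          // non-zero ResultType = failed sign-in
| summarize FailedAttempts = count(),
            Apps = make_set(AppDisplayName)
          by UserPrincipalName
| where FailedAttempts > 10        // example threshold, tune per environment
| order by FailedAttempts desc
```

When saved as a scheduled rule, each row the query returns becomes an alert, which a Logic Apps playbook can then act on.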

Sentinel also has threat-hunting capabilities: it contains many built-in queries that analysts can run to look for suspicious behavior or anomalies, and you can add new queries yourself using Kusto Query Language (KQL), which queries the Log Analytics workspace.
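A hunting query of this kind might look like the following sketch: a simple rarity check over SigninLogs (standard schema) that surfaces sign-ins from locations a user has not been seen in during a 14-day baseline. The window lengths are illustrative choices, not a recommendation:

```kusto
// Hypothetical hunting query: sign-ins from locations not seen
// for a user during the previous 14 days.
let baseline = SigninLogs
    | where TimeGenerated between (ago(14d) .. ago(1d))
    | summarize by UserPrincipalName, Location;
SigninLogs
| where TimeGenerated > ago(1d)
| join kind=leftanti baseline on UserPrincipalName, Location
| project TimeGenerated, UserPrincipalName, Location, IPAddress, AppDisplayName
```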

Does Microsoft Sentinel only support Azure products?

No, it supports most public cloud providers, such as AWS, GCP, and others.

Suppose you want to monitor AWS CloudTrail events. This integration is built in and provided by Microsoft; all we need to do is grant the connector API access to AWS by assuming a role, and the events are pulled into the Log Analytics workspace. Everything is automatic, and the logs are parsed automatically in Log Analytics. But let’s say you want to derive a new field from the CloudTrail logs: this too is possible, and quite simple, using KQL’s extend operator together with the parse_json() or extract() functions. Microsoft provides a few built-in rules, which are a good start, and from there we can add our own rules by writing KQL queries with whatever logic we want. And that’s all!

Now consider a case where no built-in connector exists, for example an integration with Duo. Here we can use the ready-made connector function available on GitHub and import it manually into our environment by creating a dedicated Azure Function for it. The function uses Duo’s API to fetch the logs and sends them to the Log Analytics workspace. From there the process is the same one we described before, except that there are no built-in rules; we have to create them all from scratch.
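The field-extraction step above can be sketched in KQL as follows. This is a hypothetical example against the AWSCloudTrail table created by the built-in connector; RequestParameters is a JSON string in that schema, and the specific event and field picked here are illustrative:

```kusto
// Hypothetical example: derive a new column from CloudTrail logs.
AWSCloudTrail
| where EventName == "PutObject"                     // example event of interest
| extend Params = parse_json(RequestParameters)      // parse the JSON payload
| extend BucketName = tostring(Params.bucketName)    // promote one field to a column
| project TimeGenerated, EventName, UserIdentityArn, BucketName
```

Once a derived column like this exists in a query, it can be used in custom analytics rules exactly like any built-in field.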

By Alex Shpilevoy, Cloud Security Specialist, and Dima Tatur, Head of Cybersecurity and RSSI Division, at Commit

