Difficulty: medium
Estimated Time: 20 minutes

Sysdig Falco is an open source project for intrusion and anomaly detection on Cloud Native platforms such as Kubernetes, Mesosphere, and Cloud Foundry.

It can detect abnormal application behavior and send alerts via Slack, Fluentd, NATS, and more.

It can also protect your platform by taking action through serverless (FaaS) frameworks, or other automation.

If you have not done so yet, it's a good idea to complete the Sysdig Falco: Container security monitoring scenario before this one.

In this lab you will learn the basics of Sysdig Falco and how to use it along with a Kubernetes cluster to detect anomalous behavior.

This scenario will cover the following security threats:

  • Unauthorized process execution
  • Writes to non-authorized directories
  • Processes opening unexpected connections to the Internet

You will play both the attacker and defender (sysadmin) roles, verifying that the intrusion attempt has been detected by Sysdig Falco.
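As the attacker, you will trigger each of the three threats from inside a container; a minimal sketch of what such actions could look like (the pod name nginx-1 is an assumption — substitute any running pod in your cluster):

```shell
# Hypothetical attack actions, each meant to trip a Falco rule.
kubectl exec nginx-1 -- touch /bin/hello               # write to a non-authorized directory
kubectl exec nginx-1 -- sh -c "apt-get update"         # unauthorized process inside the container
kubectl exec nginx-1 -- sh -c "nc -w 2 example.com 80" # unexpected outbound connection
```

As the defender, you would then check Falco's output to confirm each action raised an alert.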

Based on Sysdig Blog articles: https://sysdig.com/blog/.

In this course you experimented with the basics of Sysdig Falco and its operation on a Kubernetes cluster. You learned how to trigger alerts using Kubernetes-specific metadata.

In this scenario we used simple file output, but you can also configure a custom programmatic output to send notifications to the event and alerting systems in your organization.
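For example, Falco's file output can be swapped for a programmatic one in falco.yaml. The fragment below is a sketch using Falco's standard program_output keys; the jq wrapping and the Slack webhook URL are illustrative placeholders:

```yaml
# /etc/falco/falco.yaml (fragment)
json_output: true

program_output:
  enabled: true
  keep_alive: false
  # jq wraps each alert as a Slack message; the webhook URL is a placeholder
  program: "jq '{text: .output}' | curl -s -d @- -X POST https://hooks.slack.com/services/XXX"
```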

Eager to learn more? These are some recommended further steps:

Practical example of Kubernetes runtime security with Falco

Step 1 of 6

Setting up the environment

We have already set up a Kubernetes cluster just for you.
On the right is the terminal of the master node; from there you can interact with the cluster using the kubectl tool, which is already configured.

For instance, you can get the details of the cluster by executing kubectl cluster-info

You can view the nodes in the cluster with the command kubectl get nodes

You should see two nodes: one master and one worker.
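Put together, these inspection steps look like this (assuming the preconfigured kubectl on the master node):

```shell
kubectl cluster-info   # control plane and core service endpoints
kubectl get nodes      # expect two entries: one master, one worker
```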

Check that you are admin: kubectl auth can-i create node
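Beyond that single check, kubectl auth can-i also accepts wildcards; a quick sketch:

```shell
kubectl auth can-i create node   # "yes" if you hold admin rights on nodes
kubectl auth can-i '*' '*'       # can you perform any verb on any resource?
```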

You can view the current status of the cluster using the command kubectl get pod -n kube-system

Make sure that all the pods are in the Running state. Otherwise, wait a few moments and try again.
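Rather than re-running the command by hand, you can let kubectl block until every pod reports Ready (standard kubectl wait flags; the 120-second timeout is an arbitrary choice):

```shell
kubectl get pods -n kube-system                                              # snapshot of current state
kubectl wait --for=condition=Ready pods --all -n kube-system --timeout=120s  # block until all Ready
```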