
Rejekts North America 2019 conference notes

On Saturday 16th and Sunday 17th of November 2019, just before KubeCon NA 2019 (you can read my notes for that event as well), I attended Rejekts, the conference recycling the talks submitted (or not) to the KubeCon CFP but not "chosen" – and I even gave a quick talk there, about Helm repository hosting on GitHub Pages.

French Canadian k8s superstars present at Rejekts ;-)

The audience was rather small, fewer than a hundred people, but the choice and the content were there: serious Kubernetes talks were presented on 2 tracks over 2 days. Here are my notes.

Dynamic provisioning of k8s LocalPV, by MayaData

MayaData is the maintainer of OpenEBS.

HostPath volumes: this is where it started, with Kubernetes and local volumes. According to the documentation, they were not supposed to be used often; but in reality they are very much used. Why? Because of performance and reliability (no network involved).
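
To make this concrete, here is a minimal sketch of a pod mounting a hostPath volume (pod name, image and path are illustrative, not from the talk):

```yaml
# Minimal pod using a hostPath volume: the container reads/writes a
# directory straight on the node's filesystem (no network involved).
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo              # illustrative name
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      hostPath:
        path: /mnt/data            # directory on the node, illustrative
        type: DirectoryOrCreate    # create it if it does not exist
```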

PodSecurityPolicy: it can disable hostPath entirely, or enforce read-only access to it – this mitigates overly wide access to the host filesystem.
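
As a sketch, this is roughly what such a restriction looks like with the PodSecurityPolicy API of that time (policy name and path prefix are illustrative):

```yaml
# PodSecurityPolicy limiting hostPath: only /data may be mounted, and only
# read-only; dropping 'hostPath' from 'volumes' would disable it entirely.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-hostpath        # illustrative name
spec:
  allowedHostPaths:
    - pathPrefix: "/data"          # illustrative prefix
      readOnly: true
  volumes:
    - hostPath
    - configMap
    - secret
  # the fields below are mandatory in a PSP; permissive values for brevity
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```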

A frequent issue with hostPath is the read/write permissions for the container using it; also, hostPath usage is not really accounted for by k8s (which pods are using a given hostPath?).

LocalPV: it brings a separation of concerns: the developer (running the pods) vs the admin (setting up the volumes and host paths).

LocalPV advantage: k8s knows which node a given PV is attached to (see the sketch below).
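
A minimal LocalPV sketch: the nodeAffinity block is what ties the volume to its node in the eyes of the scheduler (capacity, path and node name are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-demo              # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1          # disk prepared by the admin, not by k8s
  nodeAffinity:                    # this is how k8s knows the PV's node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1           # illustrative node name
```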

So if LocalPV is great, what are the missing parts?

  • volumes can’t move between pods,
  • disk preparation not handled by k8s, but by the user

What’s coming in the future?

  • the SIG Storage group is working on dynamic provisioning of local volumes, using LVM (under design, alpha implementation coming soon)
  • LocalPV + CSI: merged

How does OpenEBS help?

  • Dynamic local device (no need to set path);
  • possibility to use subdirectories of a defined authorized path (see the sketch after this list)
  • ZFS support and configuration is also available
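
For illustration, a StorageClass for the OpenEBS LocalPV hostpath provisioner looks roughly like this (values follow the OpenEBS docs of the time; BasePath is the authorized path whose subdirectories get provisioned):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath             # subdirectories are created under this path
        value: /var/openebs/local
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer   # bind only once the pod has a node
reclaimPolicy: Delete
```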

Question: back to K8s OSS – NodeDiskManager is OSS, and there is some overlap with LVM.

Flatcar Container Linux, the CoreOS legacy, by Kinvolk

How to secure the internet?

  • Reduce attack surface
  • immutable filesystem
  • automatic updates
  • secure container runtime (Docker, then rkt, which led to OCI and then containerd)
  • Principle of least privilege

GIFEE: Google’s infrastructure for everyone else (by Alex Polvi)

The Tectonic Kubernetes distribution led to Typhoon and then to Lokomotive.

CoreOS Container Linux gave birth to Flatcar Container Linux, a Linux distribution.

It maintains 4 distribution channels: Alpha, Beta, Stable and Edge; the distribution is publicly available and so are the updates.

The Update Service is the child of the CoreOS Update Service, but OSS this time (it manages the fleet upgrade policy, for example monitoring which nodes were upgraded or not).

You can actually migrate from CoreOS Container Linux to Flatcar.
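
If I read the Flatcar docs correctly, the migration goes through the update mechanism itself: you point the update client of an existing CoreOS node at Flatcar's update server, and the next update brings in Flatcar (a sketch, check the current docs for the exact values):

```ini
# /etc/coreos/update.conf on a CoreOS Container Linux node
GROUP=stable
SERVER=https://public.update.flatcar-linux.org/v1/update/
```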

Future plans?

  • Update kernel to 5.x
  • eBPF tooling support
  • cgroups v2
  • better debugging support (kdump support)

Why you shouldn’t build an operator, by Josh Wood from Red Hat

Slides are available online.

What is the name about? A person? A piece of software?

Instead of dealing with pods directly, an operator allows you to declare what the goal is, via a higher level interface.

What it comes down to is: "kubernetes, go left, go right" (using k8s manifests) vs "kubernetes, take me home".
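
For example, with the etcd operator (the resource below follows its public README; name and version are illustrative), you declare the destination and the operator does the driving:

```yaml
# "kubernetes, take me home": declare the goal, not the steps; the
# operator watches this object and creates/repairs pods to match it.
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example-etcd               # illustrative name
spec:
  size: 3                          # desired cluster size, the operator does the rest
  version: "3.2.13"
```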

So why not? From a security perspective: when you install an operator, you install a CRD – and every time you add/remove types, you are operating at admin level.

Also, consistency of actions: clusters won’t behave the same from now on, depending on which operators are installed in them.

CRDs are globally defined and not particular to a given namespace.
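
Concretely, the CRD object itself is cluster-scoped; its scope field only decides whether the custom resources it defines live inside namespaces (group and names below are illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1   # the CRD API of the k8s versions of the time
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com        # illustrative; the CRD itself has no namespace
spec:
  group: example.com
  versions:
    - name: v1
      served: true
      storage: true
  scope: Namespaced                # the Widget objects are namespaced, the CRD is not
  names:
    plural: widgets
    singular: widget
    kind: Widget
```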

#1 reason: if you don’t need it, just don’t do it!
Operator Lifecycle Manager can help you get the newest features from the latest version of your operator.

OperatorHub.io is open to everyone; some commercial support is available.

Code Fast and Test Accurately Without Kubectl by Tilt.dev

The developer feedback loop used to be: write source code and tests, compile, deploy, and then review.

But now, with container images, we need to figure out what to tag and when, where to deploy, and how to juggle several tools such as Docker, Kubernetes, Helm, etc.

Three ways Tilt helps:

  • automates dev. workflow (build, test, push, deploy)
  • optimizes the feedback time by live updating containers
  • handles complexity with its API

tilt.dev: a Tiltfile allows you to define how to deploy several containers and how to orchestrate them; Tilt automatically redeploys all the k8s manifests for you when you update your code or your deployment manifests.
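
A Tiltfile is written in Starlark, a Python dialect; a minimal sketch could look like this (image, manifest and resource names are illustrative):

```python
# Tiltfile (Starlark): Tilt rebuilds the image and redeploys the manifests
# automatically whenever the source files or the YAML change.
docker_build('example/app', '.')           # build the image from the local Dockerfile
k8s_yaml('k8s/deployment.yaml')            # manifests Tilt keeps applied to the cluster
k8s_resource('app', port_forwards=8080)    # forward a port for quick local testing
```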

It really reminded me of Skaffold from Google Cloud, or Draft from Azure – I asked Dan Bentley about this, and he stated that the user experience is better, thanks to the Tilt UI, and that Tilt is extensible, being able to integrate with any kind of build.

Linkerd developers use Tilt, and the demo showed how they use it at Linkerd.

Cloud Functions meets Microservices: Running Framework-based Functions on Knative, by Chris Bailey from IBM

What is serverless? Is FaaS equal to serverless?

FaaS = Serverless + Functions

What already exists for serverless?

  • OpenWhisk, for example: 1 endpoint, 1 user, 1 container – it could not scale that well
  • Knative: multiple endpoints, multiple requests, 1 container (see the sketch below)
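
For reference, a Knative Service is a single object that gives you the endpoint, the autoscaling and the revisions (service and image names are illustrative):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                      # illustrative name
spec:
  template:
    spec:
      containers:
        - image: example/hello     # illustrative image
          env:
            - name: TARGET
              value: "world"
```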

What are people doing with serverless? The vast majority: REST APIs.

Introducing Appsody, an OSS framework to deploy on Knative or plain k8s.

Local development works with an image that downloads the environment in which you can run, test and debug; then you commit, which triggers a Tekton pipeline that builds your function into a Docker image.
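
With the Appsody CLI, that local loop looks roughly like this (the stack name is illustrative; commands as documented by the project at the time):

```sh
appsody init nodejs-express   # download the stack image and scaffold the project
appsody run                   # run the app locally inside the stack's container
appsody test                  # run the tests in the same containerized environment
appsody deploy                # build the image and deploy it to the cluster
```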

Question: compared to buildpacks? Buildpacks do not ship with observability built in (metrics, tracing), whereas Appsody includes it by default, in a consistent way across different runtimes (Java, JS, Python).