Today started out with "Continuous Integration and the Glorious Future". Tim kicked it off with some CI history: dev, dev, dev, then integration. That didn't work too well. It provided some nice perspective that I hadn't had before. Tim also gave a state of the union on Fedora automation and the items in progress, including build automation, build self-tests, and automated deployments. Some of the items that still need work are the presentation of data and results, and keeping the builds fast. More great perspective on the feedback loop and what he wants out of it: how long after a package is updated can a new compose be generated, how long after a compose is built until the tests are run, and how long after the tests are run until the developer is notified of success or failure. The QA team is also evaluating how to enable contributors to write their own automated tests. Nonstop Fedora. Tim covered quite a bit more on the why and how. Great presentation.
Next up was "Modularity: Why, where we are, and how to get involved" by Langdon White. Langdon kicked off by covering some history dating back to the "Rings Proposal": starting from "JeOS", which would be highly curated, out to the outer rings, which are not so curated. He gave some great analogies about how one size doesn't fit all, comparing the lifecycles of packages and how they don't align with one another. Then he moved into modules:
- A module is a thing that's managed as a logical unit.
- A module is a thing that promises an external, unchanging API.
- A module is a thing that may have many unexposed binary artifacts that support the external API.
- A module may "contain" other modules; such a module is referred to as a "module stack".
The process: inputs -> activities -> outputs -> outcomes -> impact.
We saw an example module input file, which walked through references, profiles, components, and filters.
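From memory, a module input (modulemd) file is YAML shaped roughly like the skeleton below. Every name and value here is invented for illustration, and the schema was still evolving at the time, so treat this as a sketch of the format rather than a reference:

    # Write out a skeleton module input (modulemd) file;
    # all names and values are invented for illustration
    cat > example-module.yaml <<'EOF'
    document: modulemd
    version: 1
    data:
      summary: An example module
      references:
        documentation: http://example.com/docs
        tracker: http://example.com/issues
      profiles:
        default:
          rpms:
            - examplepkg
      components:
        rpms:
          examplepkg:
            rationale: The main payload of the module.
      filter:
        rpms:
          - examplepkg-tests
    EOF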
Progress thus far: an established Modularity WG, a dnf plugin, an alpha version of the module build pipeline, the ability to coalesce modules for testing, and a kicked-off base-runtime.
Langdon then gave successful demos of the following (a rough command sketch follows the list):
- Searching for kernel "Fedora modules" and installing that module.
- A web server demo that really focused on how profiles are used.
- A LAMP stack demo that showcased deploying PHP 5.6 and then using modules to move to a newer PHP version.
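For a flavor of the commands involved, here's a hedged sketch using the dnf module syntax that stabilized later; the talk used an early prototype plugin, so the exact invocations on stage differed, and the stream and profile names below are illustrative:

    # List the module streams available for the kernel
    dnf module list kernel

    # Web server demo: install a module with an explicit profile
    sudo dnf module install httpd:2.4/default

    # LAMP demo: start on the php 5.6 stream...
    sudo dnf module install php:5.6

    # ...then move to a newer stream (stream name illustrative)
    sudo dnf module switch-to php:7.0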
My takeaway from this is that it's promising, new, and raw. I would encourage people who are interested to join the weekly Modularity group meetings to keep up to speed on this fast-developing tech. There were a ton of items I couldn't capture here because I was busy listening...
After Langdon's talk, I attended "Nulecule - Packaging multi container applications". Ratnadeep talked about the issues with "legacy" container creation / configuration / distribution and how the Nulecule specification helps solve them. Ratnadeep walked through a GitLab example, which needs many distributed services to stand it up; GitLab should be decomposed into multiple containers, one per service. What the Nulecule specification provides is the distribution of metadata that describes this decomposed service and makes it available to multiple backends. The implementation of the Nulecule specification is Atomic App. The application images that Atomic App generates are artifacts of the input / answer files that you pass to it. Aside from Docker, you don't need anything else on your host to get started with Nulecule / Atomic App. Ratnadeep closed out with live demos of WordPress on Docker, then WordPress on Marathon, and WordPress on Kubernetes. Finally, Ratnadeep demo'd a new feature of Atomic App called "index". He showed how to query the existing index on GitHub and also how to generate your own local index. This was a great presentation.
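To give a flavor of the workflow, here's a hedged sketch of fetching and running a Nulecule app with Atomic App; the image name is the WordPress example from the Nulecule examples, and the answers file is a placeholder you'd fill in yourself:

    # Fetch the Nulecule app and unpack its artifacts locally
    # (image name illustrative)
    atomicapp fetch projectatomic/wordpress-centos7-atomicapp

    # Deploy it to a chosen backend; providers include docker,
    # kubernetes, openshift, and marathon
    atomicapp run projectatomic/wordpress-centos7-atomicapp \
        --provider kubernetes --answers answers.conf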
Then Patrick Uiterwijk presented "Using Fedora Atomic as Workstation". He kicked off by showing that his workstation has been running Atomic since January. Some of the limitations are that there are no workstation trees and adding packages can be tricky.
Patrick got started by creating a custom tree, deploying that tree, and provisioning it. There are decisions behind each step: What should the initial package set look like? Which OS version? What delivery mechanism? Where does the compose machine live? It's really cool to sit back and see what clever guys like Patrick are doing and the problems they're solving. This was initially a pet project of his to do nothing more than satisfy his curiosity - now he's presenting the work at Flock!
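To make that concrete, here's a minimal sketch of the compose-and-rebase flow as I understood it; the treefile name, repo location, URL, and ref are all made up for illustration:

    # On the compose machine: initialize an OSTree repo and compose
    # a tree from a treefile (JSON) that defines the package set
    ostree init --repo=/srv/repo --mode=archive-z2
    rpm-ostree compose tree --repo=/srv/repo my-workstation.json

    # On the client: add the repo as a remote and rebase onto the
    # custom tree (ref name illustrative)
    sudo ostree remote add custom http://compose.example.com/repo --no-gpg-verify
    sudo rpm-ostree rebase custom:fedora/24/x86_64/my-workstation
    sudo systemctl reboot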
"Testing Bleeding edge Kernels" by Paul Moore. Paul started the talk by discussing the the kernel development life cycle. He also did great job keeping the presentation to the process and sharing his direct experiences that he's had. You can apply the same concepts that Paul mentioned for kernel development to any software project. I found it particularly interesting to see how he solved some of his challenges for simplified offline capable CI. Which of course is all on github: https://github.com/pcmoore/copr-pkg_scripts. Very insightful presentation.
Last session of the day was "Continuous security management via OpenSCAP Daemon" by Jan Cerny. I was interested in this one as it's an extremely important topic that affects the entire lifecycle of an image. Jan kicked off by discussing what makes systems secure, vulnerability assessments, and known / unknown vulnerabilities. Security compliance and guidelines vary from organization to organization. Some common weaknesses include enabling telnet or ftp, disabling SELinux, and leaving the firewall open. The thing is, there are a great many things to check - and it's not reasonable to perform these checks manually. That's where OpenSCAP comes in.
SCAP = Security Content Automation Protocol
OpenSCAP can work from common security guidelines, like the DISA STIGs, translated into SCAP documents. There's also the SCAP Security Guide project, which provides ready-made content and guidance.
There are a few different ways to run OpenSCAP. Jan recommends SCAP Workbench for noobs like me, so I installed it during his talk. It's easy to use and writes out an HTML report that you can review when it's finished. I had a couple of failures :( Will have to follow up on that. By default it ran 74 rules - looks comprehensive to me. The way I ran it was manual; they now have OpenSCAP Daemon, which can do continuous security management. Other capabilities include offline scanning of VMs, scanning remote machines over SSH, scanning Docker images and containers, and local scans. Jan also demoed creating a task with oscapd-cli in interactive mode.
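For reference, here's roughly the command-line equivalent of the manual scan, plus the daemon-side task creation Jan demoed; the datastream path and profile name depend on your scap-security-guide install, so treat them as placeholders:

    # List the profiles available in the installed SSG datastream
    oscap info /usr/share/xml/scap/ssg/content/ssg-fedora-ds.xml

    # One-shot scan with an HTML report (profile name illustrative)
    sudo oscap xccdf eval \
        --profile xccdf_org.ssgproject.content_profile_common \
        --report report.html \
        /usr/share/xml/scap/ssg/content/ssg-fedora-ds.xml

    # Daemon side: define a recurring task interactively
    sudo oscapd-cli task-create -i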
Jan then demo'd running a scan on a container using "atomic scan". This kicked off an OpenSCAP container and scanned the container ID that was passed to it. OpenSCAP leverages the offline scanning capability and can detect the OS that's inside the container. You can also scan Docker images. Another great presentation.
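In other words, something along these lines, where the container ID and image name are placeholders for whatever you have locally:

    # Scan a running container by ID (placeholder ID)
    sudo atomic scan 3f4b6f0e2a1c

    # Docker images work the same way (placeholder name)
    sudo atomic scan fedora:24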
Closing out the day by taking a cruise here in Krakow. They keep us busy....