mupuf.org // we are octopi

Setting Up a CI System Part 2: Generating and Deploying Your Test Environment

This article is part of a series on how to set up a bare-metal CI system for Linux driver development. Check out part 1, where we lay out the context and high-level principles of the whole CI system, and make the machine fully controllable remotely (power on, OS to boot, keyboard/screen emulation using a serial console).

In this article, we will start demystifying the boot process and discuss different ways to generate and boot an OS image along with a kernel for your machine. Finally, we will introduce boot2container, a project that makes running containers on bare metal a breeze!

This work is sponsored by the Valve Corporation.

Setting Up a CI System Part 1: Preparing Your Test Machines

Under contracting work for Valve Corporation, I have been working with Charlie Turner and Andres Gomez from Igalia to develop a CI test farm for driver testing (mostly graphics).

This is now the fifth CI system I have worked with or on, and I am growing tired of not being able to re-use components from the previous systems because of how deeply integrated their components are, and how implementation details permeate from one component to another. Additionally, such designs limit a system's ability to grow, as updating one component would impact many others, making changes difficult or even impossible without rewriting the system or taking it down for multiple hours.

With this new system, I am putting emphasis on designing good interfaces between components in order to create an open source toolbox that CI systems can re-use freely and tailor to their needs, without painting themselves into a corner.

I aim to blog about all the different components/interfaces we will be making for this test system, but in this article, I would like to start with the basics: proposing design goals, and setting up a machine to be controllable remotely by a test system.

Final Week at Intel, Moving on to Being a Self-employed Contractor

This week was my last week at Intel after over 5.5 years there. My journey at Intel has been really interesting, going from Mesa development to Continuous Integration / Validation, then joining the i915 display team and realizing my vision of production-ready upstream drivers through the creation of the Intel GFX CI system! Finally, my last year at Intel was spent as the CI/Tooling Architect for the validation organization. There, I was writing tools and processes to improve program management and validation, rather than just focusing on developers like I used to. This taught me quite a bit about managerial structures and organizations in general, but kept pushing me toward a narrower and narrower type of work, which left me longing for the days when I could go and hack on any codebase and directly collaborate with other engineers, no matter where they are.

This opportunity came to me in the form of becoming a self-employed contractor, hired by Valve. I expect to be working throughout the stack on improving Linux as a gaming platform, and to strengthen a fantastic team of engineers who, despite being a community effort, have delivered arguably one of the best Vulkan drivers in the industry (RADV with ACO). This definitely brings me back to my Nouveau days (minus the power management issues), but this time I come with a lot more experience, especially around testing and windowing-system integration!

I am very thankful for everything I learnt at Intel, contributing to improving the quality of the drivers, and counting world-class talents among my colleagues and friends. However, unlike at traditional companies, where moving to another one means changing projects and no longer interacting with the same people, open source drivers transcend companies, so I know that we will still be working together one way or another!

So long, and thanks for all the fish!

3 Ways of Hosting a Live-streamed Conference Using Jitsi

A bit over a week ago, I finished hosting the virtual X.Org Developer Conference (XDC) 2020 with my friend Arkadiusz Hiler (AKA Arek / ivyl). This conference has been livestreamed every single year since 2012, but this was the first time that we went fully-virtual and needed to have hosts / speakers present from their homes.

Of course, XDC was not the only conference this year that needed to become fully virtual, so we were lucky enough to learn from others (thanks to LWN for its article on LPC 2020, and to Guy Lunardi), and this blog post is my attempt at sharing some of the knowledge we acquired by running XDC 2020.

In this blog post, I will explain how we selected Jitsi over other video-conferencing solutions and how to deploy it, then present 3 different ways to use it for live-streaming your conference. Let’s get to it, shall we?

FPGA: Why So Few Open Source Drivers for Open Hardware?

Field-Programmable Gate Arrays (FPGA) have been an interest of mine for well over a decade now. Being able to generate complex signals in the tens of MHz range with nanosecond accuracy, dealing with fast data streams, and doing all of this at a fraction of the power consumption of fast CPUs, they really have a lot of potential for fun. However, their prohibitive cost, proprietary toolchains (some running only on Windows), and insanely long bitstream-generation times made them look more like a curiosity to me than a practical solution. Finally, writing Verilog / VHDL directly felt like the equivalent of writing an OS in assembly, and thus more like torture than fun for the young C/C++ developer that I was. Little did I know that 10+ years later, I would find HW development to be the most amazing thing ever!

The first thing that changed is that I got involved in reverse engineering NVIDIA GPUs’ power management in order to write an open source driver, writing firmware in reverse-engineered assembly to implement automatic power management for this driver, creating my own smart wireless modems which detect the PHY parameters of incoming transmissions on the fly (modulation, center frequency) using software-defined radio, and having fun with Arduinos, single-board computers, and designing my own custom PCBs.

The second thing that changed is that Moore’s law has ground to a halt, leading to a more architecture-centric rather than fab-oriented world. This reduced the advantage ASICs had over FPGAs, by creating a software ecosystem geared more towards parallelism than high-frequency single-thread performance.

Finally, FPGAs along with their community have gotten a whole lot more attractive! From the FPGAs themselves to their toolchains, let’s review what changed, and then ask ourselves why this has not translated to upstream Linux drivers for FPGA-based open source designs.

Xf86-video-modesetting: Tear-free Desktops for All!

We have all had this bad experience: you are watching a video of your favourite show or playing your favourite game, and a jumpy horizontal (and/or diagonal) line breaks your immersion and reminds you that this is all fiction. This effect is usually called tearing, and you can see an example of it in the following video (already visible in the thumbnail):

Another issue that some users have been hitting is not being able to have three 4k displays set horizontally. In this blog post, I will explain how I managed to kill these two birds with my per-CRTC framebuffer stone.

Nura Headphones on Linux

Tl;dr: A quirk for the USB mode is on its way upstream to fix the problem; in the meantime, force a sampling rate of 48 kHz to get sound out.

A couple of days ago, I received my nuraphones, which I backed on Kickstarter some time ago. So far, I really like the sound quality; they sound a bit better than my monitoring loudspeakers. I really like in-ear monitors, so this headset is no issue for me, on the contrary!

Since I am exclusively a Linux user, I wanted to get things working on my work PC and my Sailfish OS X. I had no issue with Bluetooth on my phone and desktop PC (instructions), but the tethered mode was not working on either platform… The sound card would be recognized, but no sound would come out…
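For reference, the 48 kHz workaround from the tl;dr above can be applied system-wide through PulseAudio's daemon configuration. This is only a sketch: it assumes your system runs PulseAudio, and the exact file path may differ per distribution.

```ini
# /etc/pulse/daemon.conf (or ~/.config/pulse/daemon.conf for a per-user override)
# Force PulseAudio to open the sound card at 48 kHz instead of the 44.1 kHz default
default-sample-rate = 48000
# Pin the fallback rate to 48 kHz as well, so the card is never reopened at 44.1 kHz
alternate-sample-rate = 48000
```

After editing the file, restart PulseAudio (for instance with `pulseaudio -k`, which kills the daemon so it gets respawned with the new settings).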

Beating Outdated Software, the Cancer of Smart Devices

Foreword: This article was originally written for the Interdisciplinary Journal of the Environment Tvergastein, and was published in its 9th edition. Thanks to the journal’s committee for allowing me to re-post it on my blog (great for search engines, but definitely bad for the styling…). Finally, I would like to thank Outi Pitkänen for motivating me to write this article, reviewing it countless times, and pushing me to make it as accessible as possible!

Our society relies more and more on smart devices to ease communication and to be more efficient. Smart devices are transforming both industries and personal lives. Smart and self-organising wide-area sensor networks are now used to increase the efficiency of farms, cities, supply chains and power grids. Because they are always connected to the Internet, they can constantly and accurately monitor assets and help deliver what is required precisely when and where it is needed. The general public has also seen the transition to smart devices, with cell phones becoming smartphones, TVs becoming smart TVs, and cars becoming semi-autonomous.

Life in Finland

Hey everyone, long time no sign of life!

I have been quite busy at Intel, helping here and there on Mesa, the kernel, or the X-Server. However, I have recently been focusing on the testing side of the graphics stack and got my testing project, EzBench, hosted on Freedesktop. I also presented it at XDC 2015 (LWN recap), FOSDEM 2016, and XDC 2016 (which I organized in Helsinki with Tuomo Ryynänen from Haaga-Helia Pasila).