<h1>Various Ways of Sending Mail via SMTP</h1>
<p>Das Bityard · Charles · 2023-01-30 (updated 2023-12-10)</p>
<p>Internet Mail, or email, or whatever kids these days call it, was one of those things that terrified me very early on when I was a strapping young System Administrator. Everything else that I was doing at the time seemed comparatively easy: Linux/BSD installs, system setup, automation, and such. Learning how various Unix shells and relational databases worked was a joy. But mail server administration... now <em>that</em> scared the hell out of me.</p>
<p>E-mail was and still is a complicated, fragile system. You can do everything right and <em>still</em> end up arse-deep in alligators due to someone else's mistake or bad hair day. There's just so much that can go wrong. To run a successful mail server means that you have to--at a bare minimum--concern yourself with such trivialities as:</p>
<ol>
<li>Getting <em>all</em> the DNS records just so.</li>
<li>Starting with a "clean" IP address... and keeping it that way.</li>
<li>Setting up user accounts and authentication.</li>
<li>Knowing how to configure the SMTP server.</li>
<li>Knowing how to configure the POP/IMAP server.</li>
<li>Oh yes, and most importantly: not letting the mail server become a spammer's playground.</li>
</ol>
<p>One of my first jobs was at a managed web hosting provider. Back then, if you wanted to become an expert on Apache, PHP, and email, then working the phones at a company like this was the quickest path to "grizzled veteran" status. It's safe to say I learned me some email at that job.</p>
<figure>
<img src="images/smtp/exim.png">
</figure>
<p>I'm pretty comfortable with mail administration and troubleshooting nowadays. Heck, I even host my own personal mail server. Not out of necessity or anything, mostly just to annoy people on Reddit and HN who say it's impossible. My own setup is pretty stable and very rarely needs any attention. But either at work or at home, I sometimes find myself needing to troubleshoot occasional mail-related issues.</p>
<p>When The Mail Doth Not Flow, one of the most basic things you find yourself doing is sending test messages. Sometimes from systems that don't even have a proper mail server, client, or relay. For better or worse, it turns out that the "simple" in <a href="https://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol">SMTP</a> is not as much of a lie as "lightweight" in <a href="https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol">LDAP</a>, and you don't often need a lot of ceremony just to fire off a simple message or two for testing or notifications from a barebones system. This article describes a few methods for doing so.</p>
<p><strong>Important:</strong> I'm going to use <code>example.com</code> as the domain here for illustrative purposes. This is not, in fact, a real mail server and it will never accept your messages or advances, even if you come armed with flowers and chocolate.</p>
<p>If you want to try some of these out but don't have your own mail server to fool around on, probably the best option is to fire up a Docker container or VM and install Postfix inside it. You can <em>try</em> to send mail to major mail providers using these methods but strive to contain your inevitable outrage if it doesn't work, especially if you are sending from a residential IP address.</p>
<h2>SMTP Basics</h2>
<p>It's important to know that when you (or perhaps even your mail client) send a message via SMTP, you're not just blasting a request at a server and hoping for a response, as with HTTP. Instead, SMTP more closely resembles a <em>conversation</em>. You say something, the server replies. You say another thing, the server replies again, and so on, until everything that needs to be said has been said and the discussion ends amicably. If you say something out of order, or that the mail server doesn't understand, it will act confused or just rudely hang up on you.</p>
<figure>
<img src="images/smtp/protip.png">
</figure>
<p>It's also worth pointing out early on that the SMTP standards require CR+LF line endings. (That's a carriage return character <code>0x0d</code>, followed by a linefeed character <code>0x0a</code>.) Most mail servers will happily accept stand-alone LF or (heaven forbid) CR line endings, but you shouldn't always count on that. When troubleshooting, you generally want to do things the way they are supposed to be done so as not to be led down the garden path by your own incompetence. Ask me how I know.</p>
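<p>If you ever find yourself generating protocol lines from a script rather than typing them, it's worth being explicit about the terminator instead of trusting whatever your platform calls a "newline". A tiny Python sketch of the idea:</p>

```python
def smtp_line(cmd: str) -> bytes:
    """Encode one SMTP command line with the CR+LF terminator the RFCs require."""
    return cmd.encode("ascii") + b"\r\n"

print(smtp_line("HELO blog.bityard.net"))  # b'HELO blog.bityard.net\r\n'
```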
<p>Finally, SMTP servers generally listen on TCP port 25, among others. (Other ports might mandate TLS, or refuse to continue without STARTTLS, or require authentication.)</p>
<h3>1. Greeting</h3>
<p>When you connect to an SMTP server, it will tell you its name and then wait for you to greet it with yours. The first thing you say to a mail server is almost literally, "Hello, I am (insert name here)." You greet the server and tell it the hostname of the machine you're sending mail from.<sup id="fnref:hostname"><a class="footnote-ref" href="#fn:hostname">1</a></sup> If you didn't offend it somehow, the mail server responds simply with <code>250 OK</code>. In the following example, I have connected to the mail server at <code>mail.example.com</code> and told it that my own hostname is <code>blog.bityard.net</code>:</p>
<figure>
<img src="images/smtp/helo.png">
</figure>
<p>Different mail servers reply with different text; the important bit is that the response starts with <code>250</code>. That's SMTPese for, "I don't hate you yet, let's keep talking."</p>
<p>You can also use <code>EHLO</code> instead of <code>HELO</code>. All this does is tell the server that you're a client that can handle SMTP features invented within the last 30 years or so. For the purposes of courageous troubleshooting or intrepid messing around, it doesn't really matter much which one you use but I'll be using <code>EHLO</code> from now on because it sounds more British.</p>
<h3>2. Envelope</h3>
<p>Next we say who the message is from and who the message is to.</p>
<figure>
<img src="images/smtp/envelope.png">
</figure>
<h3>3. Message</h3>
<p>If you've made it this far, there's a <em>fair</em> chance the server will accept the message <em>and</em> it might actually even deliver it. So we tell it that we're about to send the message:</p>
<figure>
<img src="images/smtp/data.png">
</figure>
<p>This means the server is ready to accept the message. Each mail message consists of two parts, the headers and the body. These must be separated by a blank line. (If you're reading this article, I'll presume you know what email headers are.) Note that the mail server is helpfully telling you how to signal the end of the message: a line containing nothing but a single dot (that is, CR+LF, a dot, and another CR+LF). Here's an example of what to send:</p>
<figure>
<img src="images/smtp/message.png">
</figure>
<p>Note that different mail servers tend to respond to confusion in the headers in different ways. The <code>To</code> and <code>From</code> headers don't <em>always</em> have to match what you put in the envelope (this is to allow for things like mailing lists and forwarding to work), and technically a <code>Subject</code> header is optional. But a lot of things will go easier for you in life if you don't try to optimize for the smallest possible character count.</p>
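<p>To see the envelope/header split concretely, here's a hedged Python sketch using the standard library's <code>email.message</code>: the header addresses are just text inside the message, while the envelope addresses are whatever you hand the server in <code>MAIL FROM</code> and <code>RCPT TO</code>. The <code>smtplib</code> call is shown but commented out, since <code>mail.example.com</code> isn't a real server and the addresses are made up for illustration:</p>

```python
from email.message import EmailMessage

# Headers: what the recipient's mail client displays.
msg = EmailMessage()
msg["From"] = "newsletter@bityard.net"
msg["To"] = "subscribers@bityard.net"
msg["Subject"] = "Envelope vs. headers"
msg.set_content("Routing uses the envelope, not these headers.")

# Envelope: what MAIL FROM / RCPT TO actually carry. Not run here, because
# mail.example.com is a stand-in and will not accept your messages (or advances).
# import smtplib
# with smtplib.SMTP("mail.example.com", 25) as smtp:
#     smtp.send_message(msg, from_addr="bounces@bityard.net",
#                       to_addrs=["bob@example.com"])
```

This is exactly how mailing lists and forwarding get away with it: the envelope says where the message really goes, and the headers say whatever the story requires.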
<p>If the server was not terribly displeased by your inane ramblings, it accepts the message. Note that the mail server can still do whatever it wants with the message after acceptance. Up to and including:</p>
<ol>
<li>deliver it into a user's mailbox</li>
<li>forward it to another mail server</li>
<li>drop it on the floor (unceremoniously delete it)</li>
<li>broadcast it into outer space via radio signal to show alien civilizations that there is no intelligent life here</li>
</ol>
<p>If at any point you feel like you have made a sufficient fool of yourself, you can always bail out with the <code>QUIT</code> command or just close the TCP connection.</p>
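<p>If you'd rather script the whole conversation than type it, the turn-taking above maps to a handful of lines of Python. This is a minimal sketch under stated assumptions (no TLS, no authentication, no real error recovery, and no dot-stuffing of body lines that start with a period, which a real client would do); the reply parser does handle multi-line replies, where every line but the last puts a dash after the code:</p>

```python
import socket

def read_reply(rfile) -> int:
    """Read one SMTP reply (possibly multi-line) and return its 3-digit code."""
    while True:
        line = rfile.readline().decode("ascii", "replace")
        if not line:
            raise ConnectionError("server hung up on us")
        # Continuation lines look like "250-STARTTLS"; the final line is "250 OK".
        if len(line) < 4 or line[3] != "-":
            return int(line[:3])

def send_cmd(rfile, wfile, cmd: str) -> int:
    """Send one CR+LF-terminated command, then wait for the server's reply."""
    wfile.write(cmd.encode("ascii") + b"\r\n")
    wfile.flush()
    return read_reply(rfile)

def send_message(host: str, port: int = 25) -> None:
    # Sketch only: mail.example.com is a stand-in and won't answer.
    with socket.create_connection((host, port)) as sock:
        r, w = sock.makefile("rb"), sock.makefile("wb")
        assert read_reply(r) == 220                         # server greeting
        assert send_cmd(r, w, "EHLO blog.bityard.net") == 250
        assert send_cmd(r, w, "MAIL FROM:<alice@bityard.net>") == 250
        assert send_cmd(r, w, "RCPT TO:<bob@example.com>") == 250
        assert send_cmd(r, w, "DATA") == 354
        for line in ("From: alice@bityard.net", "To: bob@example.com",
                     "Subject: Test", "", "Hello from a socket.", "."):
            w.write(line.encode("ascii") + b"\r\n")
        w.flush()
        assert read_reply(r) == 250                         # message accepted
        send_cmd(r, w, "QUIT")                              # 221, then hang up
```

Note how each command waits for a reply before the next one goes out; that's the "conversation" property that matters in the sections below.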
<h2>The Telnet Way</h2>
<p><a href="https://en.wikipedia.org/wiki/Telnet">Telnet</a> is ostensibly its own protocol, not just some low-level TCP client. But in practice it often works as one anyway, and we can use it to manually simulate a number of other protocols such as SMTP. This is the closest some of us will ever get to being one of those super-cool computer hackers in action movies that save the day by cracking a military-grade encryption algorithm with seconds left to spare.</p>
<p>To send a message, run the <code>telnet</code> command with the server hostname and TCP port number as arguments:</p>
<div class="highlight"><pre><span></span><code>$ telnet mail.example.com 25
Trying 127.0.0.1...
Connected to mail.example.com
Escape character is '^]'.
220 mail.example.com ESMTP Postfix (Ubuntu)
</code></pre></div>
<p>Although, as we noted above, regular newlines will <em>probably</em> work, the proper and correct thing to do is to switch <code>telnet</code>'s line endings to CR+LF. To do that, type <code>^]</code> followed by Enter and then:</p>
<div class="highlight"><pre><span></span><code>^]
telnet> toggle crlf
Will send carriage returns as telnet <CR><LF>.
</code></pre></div>
<p>From this point on, continue sending your message:</p>
<div class="highlight"><pre><span></span><code>HELO blog.bityard.net
250 OK
MAIL FROM:<alice@bityard.net>
250 OK - mail from alice@bityard.net
RCPT TO:<bob@example.com>
250 OK
data
354 End data with <CR><LF>.<CR><LF>
From: alice@bityard.net
To: bob@example.com
Subject: Why are fish so easy to weigh?

Because they have their own scales.
.
250 OK
quit
221 BYE
</code></pre></div>
<h2>The Netcat Way</h2>
<p>If your Unix machine is far too modern and hip to have an old fossil like <code>telnet</code> lying around, then perhaps it has <code>netcat</code>? If so, the process is largely similar, except you start the program with the <code>-C</code> flag to tell <code>netcat</code> to use CR+LF line endings:</p>
<div class="highlight"><pre><span></span><code>$ nc -C mail.example.com 25
</code></pre></div>
<p>From here, your port is open and you can just bash out the conversation on your keyboard.</p>
<p>Since <code>netcat</code> is, after all, designed to be stuffed into pipelines, you could conceivably put your half of the conversation into a file and just blast it at the server, right? Well, you could try, and you will sometimes even get away with it. Remember what I said above: SMTP is a conversation. If you start barking multiple commands at the server without waiting for a response in between, it will complain because that isn't a <em>conversation</em>.<sup id="fnref:piplining"><a class="footnote-ref" href="#fn:piplining">2</a></sup></p>
<p>There is a cheap hacky work-around to this, though: you can tell <code>netcat</code> to wait a certain amount of time between sending lines, in order to give the server time to respond to commands. This will often work, but you'll want to pay attention and adjust the interval when working with particularly lethargic mail servers. (And I certainly do NOT recommend doing this for any kind of permanent solution. It is very brittle.)</p>
<p>Here is what a file called <code>test.smtp</code> might look like (be sure to use CR+LF line endings in the file!):</p>
<div class="highlight"><pre><span></span><code>HELO blog.bityard.net
MAIL FROM:<alice@bityard.net>
RCPT TO:<bob@example.com>
DATA
From: alice@bityard.net
To: bob@example.com
Subject: Originally, I didn't like having a beard.

But then it grew on me.
.
QUIT
</code></pre></div>
<p>And this is how you would send it:</p>
<div class="highlight"><pre><span></span><code>nc -C -i 1 mail.example.com 25 < test.smtp
</code></pre></div>
<h2>The Python Way</h2>
<p>One of the better ways to send a message from a host that has <a href="https://python.org">Python</a> installed is with a short script. This is made possible by virtue of Python's built-in <a href="https://docs.python.org/3/library/smtplib.html">smtplib</a> module. The nice thing about this is that it's highly flexible and doesn't require any other local mail server or tools.</p>
<div class="highlight"><pre><span></span><code><span class="ch">#!/usr/bin/env python3</span>
<span class="kn">import</span> <span class="nn">smtplib</span>
<span class="kn">from</span> <span class="nn">email.message</span> <span class="kn">import</span> <span class="n">EmailMessage</span>
<span class="n">msg</span> <span class="o">=</span> <span class="n">EmailMessage</span><span class="p">()</span>
<span class="n">msg</span><span class="p">[</span><span class="s1">'From'</span><span class="p">]</span> <span class="o">=</span> <span class="s1">'alice@bityard.net'</span>
<span class="n">msg</span><span class="p">[</span><span class="s1">'To'</span><span class="p">]</span> <span class="o">=</span> <span class="s1">'bob@example.com'</span>
<span class="n">msg</span><span class="p">[</span><span class="s1">'Subject'</span><span class="p">]</span> <span class="o">=</span> <span class="s1">'Every time you swallow some food coloring...'</span>
<span class="n">msg</span><span class="o">.</span><span class="n">set_content</span><span class="p">(</span><span class="s1">'...you dye a little inside.'</span><span class="p">)</span>
<span class="n">smtp</span> <span class="o">=</span> <span class="n">smtplib</span><span class="o">.</span><span class="n">SMTP</span><span class="p">(</span><span class="s1">'mail.example.com'</span><span class="p">,</span> <span class="mi">25</span><span class="p">)</span>
<span class="n">smtp</span><span class="o">.</span><span class="n">set_debuglevel</span><span class="p">(</span><span class="mi">2</span><span class="p">)</span>
<span class="n">smtp</span><span class="o">.</span><span class="n">send_message</span><span class="p">(</span><span class="n">msg</span><span class="p">)</span>
<span class="n">smtp</span><span class="o">.</span><span class="n">quit</span><span class="p">()</span>
</code></pre></div>
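<p>The same module also covers the cases that plain port 25 often doesn't: submission ports that insist on STARTTLS and authentication, as mentioned back in the basics. A hedged sketch along those lines; the hostname and credentials are placeholders, and nothing here actually runs against a real server:</p>

```python
#!/usr/bin/env python3
import smtplib
import ssl
from email.message import EmailMessage

def send_via_submission(host: str, user: str, password: str,
                        msg: EmailMessage) -> None:
    """Submit a message on port 587 with STARTTLS and AUTH (sketch)."""
    context = ssl.create_default_context()
    with smtplib.SMTP(host, 587) as smtp:
        smtp.starttls(context=context)   # upgrade the plaintext session to TLS
        smtp.login(user, password)       # most submission ports require this
        smtp.send_message(msg)

msg = EmailMessage()
msg["From"] = "alice@bityard.net"
msg["To"] = "bob@example.com"
msg["Subject"] = "What do you call a fake noodle?"
msg.set_content("An impasta.")

# Placeholder values; mail.example.com will not take your mail:
# send_via_submission("mail.example.com", "alice", "hunter2", msg)
```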
<h2>The Sendmail Way</h2>
<p>Unix greybeards will remember Sendmail, possibly as a motivation for taking up a burning interest in the hobby of drinking to excess. As a mail server, it has mostly been supplanted by more modern and sensible options. But several parts of its legacy live on and one of those is the <code>sendmail</code> client for sending mail from the command-line.</p>
<p>The <code>sendmail</code> command allows one to write (or of course generate) a message in a standard format and then send it on its way to a mail server. If the host you're logged into has a mail server running on it (such as Sendmail, Postfix, Exim, etc), then the <code>sendmail</code> command is likely available. There are also stand-alone mail transfer agents that <em>only</em> accept messages and forward them along to some "real" mail server. (The one that I usually reach for is <a href="https://marlam.de/msmtp/">msmtp</a>.)</p>
<p>If a <code>sendmail</code> command exists on the host, you can use it to send messages which were written as text files. Let's assemble the following message as <code>my_message.eml</code> in the text editor of your choice:</p>
<div class="highlight"><pre><span></span><code>From: alice@bityard.net
To: bob@example.com
Subject: I recently developed an irrational fear of elevators

Since then, I have been taking steps to avoid them.
</code></pre></div>
<p>Notice that the top of the message has <em>headers</em> followed by a blank line, followed by the message. Theoretically, only the <code>To</code> header is required, but it depends on which <code>sendmail</code> variant you have installed. It's a good idea to include all three in any case; it will possibly make your life less interesting-but-in-a-bad-way.</p>
<p>You can send it with:</p>
<div class="highlight"><pre><span></span><code>sendmail -vt < my_message.eml
</code></pre></div>
<p>The <code>-v</code> flag tells the command to report what it's doing (useful when troubleshooting) and the <code>-t</code> flag tells it to read the recipient(s) from the headers in the message itself. Your <code>sendmail</code> implementation may have other options to investigate. Feel free to peruse them with <code>man sendmail</code>.</p>
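<p>If you're generating such files rather than hand-writing them, the same standard-library <code>email</code> module from the Python section produces exactly this format, blank line included. A sketch (the <code>sendmail</code> pipe at the end is shown but commented out, since not every host has one):</p>

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@bityard.net"
msg["To"] = "bob@example.com"
msg["Subject"] = "Why don't skeletons fight each other?"
msg.set_content("They don't have the guts.")

# as_bytes() renders the headers, then a blank line, then the body
with open("my_message.eml", "wb") as f:
    f.write(msg.as_bytes())

# Then hand the file to sendmail, e.g.:
# import subprocess
# with open("my_message.eml", "rb") as f:
#     subprocess.run(["sendmail", "-vt"], stdin=f, check=True)
```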
<h2>The Swaks Way</h2>
<p>On systems with <a href="https://www.perl.org">Perl</a> (or a package manager that can install one), <a href="https://github.com/jetmore/swaks">Swaks</a> may be an option.</p>
<p>Swaks describes itself as a "Swiss Army Knife for SMTP". The nice thing about Swaks is that it lets you test and verify aspects of your SMTP configuration that would otherwise take a lot of setup or custom code. You can use it to test encryption (TLS, STARTTLS), authentication, SMTP protocol variants, sockets, proxies, and a whole bunch more.</p>
<p>See <a href="https://github.com/jetmore/swaks/blob/v20201014.0/doc/base.pod">the docs</a> for full details, but a simple test message can be sent with:</p>
<div class="highlight"><pre><span></span><code>swaks --to alice@example.com --server mail.example.com
</code></pre></div>
<h2>The Bash Way</h2>
<p>I present this way last because, of the ways presented so far, this one is the most ill-advised. It's here mainly for completeness and probably should not be used for anything serious except by those afflicted with chronic self-loathing. In any case, you do you.</p>
<p><a href="https://tiswww.case.edu/php/chet/bash/bashtop.html">Bash</a> has this <a href="https://tiswww.case.edu/php/chet/bash/bashref.html#Redirections">one weird trick</a> where you can open a TCP (or UDP) port to another host and read and write to it with a file descriptor. This means you can (in theory) write a Bash script to communicate with any Internet service. Now, Bash is good at a great many things, but writing a robust SMTP client would be quite a challenge. Nevertheless, if we sacrifice our sanity a little and don't mind some repetition, we can get away with the bare minimum needed to send a message.</p>
<p>The following script was lightly modified for clarity but was based on <a href="https://stackoverflow.com/a/10001357">this answer</a> from Stack Overflow.</p>
<div class="highlight"><pre><span></span><code><span class="ch">#!/usr/bin/env bash</span>
<span class="nb">readonly</span><span class="w"> </span><span class="nv">smtp_host</span><span class="o">=</span>mail.example.com
<span class="nb">readonly</span><span class="w"> </span><span class="nv">smtp_port</span><span class="o">=</span><span class="m">25</span>
<span class="nb">readonly</span><span class="w"> </span><span class="nv">msg_from</span><span class="o">=</span>alice@bityard.net
<span class="nb">readonly</span><span class="w"> </span><span class="nv">msg_to</span><span class="o">=</span>bob@example.com
<span class="nb">readonly</span><span class="w"> </span><span class="nv">msg_subject</span><span class="o">=</span><span class="s1">'Why did the scarecrow win an award?'</span>
<span class="nb">readonly</span><span class="w"> </span><span class="nv">msg_body</span><span class="o">=</span><span class="s1">'Because he was outstanding in his field.'</span>
<span class="c1"># send a line ending in a carriage return followed by an implicit line feed</span>
<span class="c1"># (`echo` prints a line feed at the end of each line automatically)</span>
send<span class="o">()</span><span class="w"> </span><span class="o">{</span>
<span class="w"> </span><span class="nb">echo</span><span class="w"> </span>-e<span class="w"> </span><span class="s2">"</span><span class="nv">$@</span><span class="s2">\r"</span><span class="w"> </span>><span class="p">&</span><span class="m">3</span>
<span class="o">}</span>
<span class="c1"># check the status code returned by the server</span>
<span class="c1"># if it's not what we expect, bail out</span>
check_status<span class="w"> </span><span class="o">()</span><span class="w"> </span><span class="o">{</span>
<span class="w"> </span><span class="nv">expect</span><span class="o">=</span><span class="m">250</span>
<span class="w"> </span><span class="k">if</span><span class="w"> </span><span class="o">[</span><span class="w"> </span><span class="nv">$#</span><span class="w"> </span>-eq<span class="w"> </span><span class="m">3</span><span class="w"> </span><span class="o">]</span><span class="w"> </span><span class="p">;</span><span class="w"> </span><span class="k">then</span>
<span class="w"> </span><span class="nv">expect</span><span class="o">=</span><span class="s2">"</span><span class="nv">$3</span><span class="s2">"</span>
<span class="w"> </span><span class="k">fi</span>
<span class="w"> </span><span class="k">if</span><span class="w"> </span><span class="o">[</span><span class="w"> </span><span class="nv">$1</span><span class="w"> </span>-ne<span class="w"> </span><span class="nv">$expect</span><span class="w"> </span><span class="o">]</span><span class="w"> </span><span class="p">;</span><span class="w"> </span><span class="k">then</span>
<span class="w"> </span><span class="nb">echo</span><span class="w"> </span><span class="s2">"Error: </span><span class="nv">$2</span><span class="s2">"</span><span class="w"> </span>><span class="p">&</span><span class="m">2</span>
<span class="w"> </span><span class="nb">exit</span>
<span class="w"> </span><span class="k">fi</span>
<span class="o">}</span>
<span class="c1"># open a TCP connection to the mail server</span>
<span class="nb">exec</span><span class="w"> </span><span class="m">3</span><>/dev/tcp/<span class="nv">$smtp_host</span>/<span class="nv">$smtp_port</span>
<span class="nb">read</span><span class="w"> </span>-u<span class="w"> </span><span class="m">3</span><span class="w"> </span>status<span class="w"> </span>text
check_status<span class="w"> </span><span class="s2">"</span><span class="nv">$status</span><span class="s2">"</span><span class="w"> </span><span class="s2">"</span><span class="nv">$text</span><span class="s2">"</span><span class="w"> </span><span class="m">220</span>
<span class="c1"># greet the server</span>
send<span class="w"> </span><span class="s2">"HELO </span><span class="k">$(</span>hostname<span class="w"> </span>-f<span class="k">)</span><span class="s2">"</span>
<span class="nb">read</span><span class="w"> </span>-u<span class="w"> </span><span class="m">3</span><span class="w"> </span>status<span class="w"> </span>text
check_status<span class="w"> </span><span class="s2">"</span><span class="nv">$status</span><span class="s2">"</span><span class="w"> </span><span class="s2">"</span><span class="nv">$text</span><span class="s2">"</span>
<span class="c1"># send the envelope</span>
send<span class="w"> </span><span class="s2">"MAIL FROM: </span><span class="nv">$msg_from</span><span class="s2">"</span>
<span class="nb">read</span><span class="w"> </span>-u<span class="w"> </span><span class="m">3</span><span class="w"> </span>status<span class="w"> </span>text
check_status<span class="w"> </span><span class="s2">"</span><span class="nv">$status</span><span class="s2">"</span><span class="w"> </span><span class="s2">"</span><span class="nv">$text</span><span class="s2">"</span>
send<span class="w"> </span><span class="s2">"RCPT TO: </span><span class="nv">$msg_to</span><span class="s2">"</span>
<span class="nb">read</span><span class="w"> </span>-u<span class="w"> </span><span class="m">3</span><span class="w"> </span>status<span class="w"> </span>text
check_status<span class="w"> </span><span class="s2">"</span><span class="nv">$status</span><span class="s2">"</span><span class="w"> </span><span class="s2">"</span><span class="nv">$text</span><span class="s2">"</span>
<span class="c1"># send the message</span>
send<span class="w"> </span><span class="s2">"DATA"</span>
<span class="nb">read</span><span class="w"> </span>-u<span class="w"> </span><span class="m">3</span><span class="w"> </span>status<span class="w"> </span>text
check_status<span class="w"> </span><span class="s2">"</span><span class="nv">$status</span><span class="s2">"</span><span class="w"> </span><span class="s2">"</span><span class="nv">$text</span><span class="s2">"</span><span class="w"> </span><span class="m">354</span>
send<span class="w"> </span><span class="s2">"From: </span><span class="nv">$msg_from</span><span class="s2">"</span>
send<span class="w"> </span><span class="s2">"To: </span><span class="nv">$msg_to</span><span class="s2">"</span>
send<span class="w"> </span><span class="s2">"Subject: </span><span class="nv">$msg_subject</span><span class="s2">"</span>
send
send<span class="w"> </span><span class="s2">"</span><span class="nv">$msg_body</span><span class="s2">"</span>
send
send<span class="w"> </span><span class="s2">"."</span>
<span class="nb">read</span><span class="w"> </span>-u<span class="w"> </span><span class="m">3</span><span class="w"> </span>status<span class="w"> </span>text
check_status<span class="w"> </span><span class="s2">"</span><span class="nv">$status</span><span class="s2">"</span><span class="w"> </span><span class="s2">"</span><span class="nv">$text</span><span class="s2">"</span>
</code></pre></div>
<h2>In Conclusion</h2>
<p>I am terrible at writing conclusions. This is the end of the article, I hope you enjoyed it.</p>
<div class="footnote">
<hr>
<ol>
<li id="fn:hostname">
<p>Real Mail Servers out there in Cyberspace <em>may</em> try to verify that you are who you say you are with a DNS lookup or two and might close the connection if they think you are lying. But on an internal mail relay or somesuch, you can often get away with some degree of subterfuge. <a class="footnote-backref" href="#fnref:hostname" title="Jump back to footnote 1 in the text">↩</a></p>
</li>
<li id="fn:piplining">
<p>The more experienced readers among us will note that all modern mail servers these days support <a href="https://datatracker.ietf.org/doc/html/rfc2920">pipelining</a>, but pipelining only helps you blast <em>some</em> commands at a server in rapid-fire fashion, not all. <a class="footnote-backref" href="#fnref:piplining" title="Jump back to footnote 2 in the text">↩</a></p>
</li>
</ol>
</div>
<h1>DIY Vinyl Cut Motorcycle Emblem</h1>
<p>Charles · 2022-06-26 (updated 2023-12-10)</p>
<p>So I have this motorcycle.</p>
<figure>
<img src="images/fork-cover/motorbike-640.jpg">
</figure>
<p>It's a lot of fun.</p>
<p>I could talk at length about the general awesomeness of the late 70's to early 80's Suzuki GS muscle bikes. But! We only have so much time until the heat death of the universe. So to keep it brief, I picked up this 1979 Suzuki GS850G a few years back after seeing it on a local classified site. It was in fantastic shape, had a fair number of useful upgrades over the stock '79 model, and came with a boat-load of extra parts. Best of all, the price was right. I brought it home with me and after very little maintenance, it's been a solid runner that I look forward to riding <em>every</em> stinking chance I get.</p>
<h1>The Missing Piece</h1>
<p>After I bought it, I was so busy ogling the rest of the bike that it took a few days before I even noticed it was missing the fork cover.</p>
<p>Just below the headlight there's supposed to be a chrome bit that covers up the front brake line tee. It offers no functional value whatsoever to the motorcycle--it's a purely cosmetic touch--and it's not something that anyone except fans of the bike would even know was missing. I <em>shouldn't</em> give a crap about this useless piece of shiny. And yet...</p>
<p>The first thing one does in this situation is head onto eBay and see if by some Christmas Miracle there's a decent one listed for sale at an affordable price. I've been lucky before, but not this time. For this kind of part, most of the time, the offerings can be divided into four categories:</p>
<ol>
<li>Rusty junk ($)</li>
<li>Not rusty, but dented or missing some bits ($$)</li>
<li>Good enough, clean, shiny parts ($$$)</li>
<li>New old stock (NOS) ($$$$)</li>
</ol>
<p>I'm not going to spend a lot of analog currency on a <a href="https://www.dictionary.com/e/pop-culture/farkle/">farkle</a>. I resolved to stay in the $$ category and purchased a shiny-looking cover with a missing emblem. Which is another way of saying I solved half the problem, throwing the other half over the wall at poor, defenseless Future Me to deal with. I'm a jerk like that.</p>
<figure>
<img src="images/fork-cover/fork-cover-640.jpg">
<figcaption>Shiny, mostly.</figcaption>
</figure>
<blockquote>
<p>Sidebar: This part isn't 100% completely correct for the bike. Out of the factory, the fork cover for this bike had a more boxy--dare I say <em>chonkier</em>--look to it. But those are even harder to find and I like the slightly more streamlined look of this one better. But be forewarned, it's often a gamble when buying parts for a different model year than the vehicle you currently have, ask me how I know.</p>
</blockquote>
<figure>
<img src="images/fork-cover/fork-cover-diagram.png">
<figcaption>The real McCoy.</figcaption>
</figure>
<h1>The Solution</h1>
<p>Eventually, it came time to figure out what to do about the missing emblem. After a lot of research and dead ends, at some point I had a light bulb moment and remembered that my wife owned a vinyl cutter. She's always cutting out vinyl stickers to put on mugs, picture frames, small furry animals and so forth and I didn't see why it couldn't work here. It turns out they make sheets of adhesive vinyl that are specifically intended for use on vehicles and outdoor signage. If that stuff holds up to cars and such, it should work well enough to be a logo for a bike that's <em>almost</em> a garage queen and never sees any harsh weather beyond a bit of sun. If it turned out the way I was hoping it would, nobody would notice that the fork cover had vinyl instead of embossed metal until they stuck their face right up to it. (At which point I would ask them why they are sticking their face right up to my bike.)</p>
<p>From the factory, this cover would have had a tastefully understated stylized "S" in the middle, like this picture I stole off eBay:</p>
<figure>
<img src="images/fork-cover/rustay-640.jpg">
<figcaption>This one's been hanging out at the beach or something.</figcaption>
</figure>
<p>However, a few Suzuki bikes plastered the brand's name across nearly the entirety of the emblem which is far bolder and maybe a little snazzier:</p>
<figure>
<img src="images/fork-cover/suzuki-full-640.jpg">
<figcaption>At least you'd never have to wonder who made your bike again.</figcaption>
</figure>
<p>I hemmed and hawed over this for a good long while and couldn't make up my mind which version I wanted, so I ended up drawing both:</p>
<figure>
<a href="images/fork-cover/both-logos.png">
<img src="images/fork-cover/both-logos-640.png">
</a>
<figcaption>Decisions, decisions...</figcaption>
</figure>
<p>To design this, I got the measurements of the decal depression on the fork cover and did some trial-and-error with <a href="https://inkscape.org/">Inkscape</a>, a laserjet printer, some scissors, and a steady hand mostly unspoilt by a small glass of Jameson. I then grabbed an SVG copy of the Suzuki logo with its stylized "S" and blocky letters right off <a href="https://commons.wikimedia.org/wiki/File:Suzuki_logo_2.svg">Wikimedia Commons</a>. Thank you, Wikipedia, and thank you Suzuki for not going through 37 rebranding iterations over the last four decades.</p>
<p>The real emblems have a slightly more rectangular look to them than mine did because I was following the contours of the depression in the fork cover. However, I like the way these look and if anything, they follow the design language of the period. (Or at least, that's what I think a wanna-be hipster artist would say.)</p>
<h1>The Derustification</h1>
<p>The outside of the cover is shiny and pristine. The chrome on the inside, however, was very thin; the plating had rusted through entirely. That's par for the course on these, for better or worse. The rust was just on the surface, though, and would come off easily.</p>
<figure>
<img src="images/fork-cover/inside-640.jpg">
<figcaption>Some spiders lived here once.</figcaption>
</figure>
<p>Once I had a game plan, it was time to put down the whiskey and start actually doing stuff. The first order of business was to de-rust the underside of the cover. Normally my go-to for rust removal is vinegar and plenty of patience. But vinegar is an acid and I wasn't 100% sure it wouldn't tarnish or eat into the chrome. I had been wanting to try <a href="https://www.evapo-rust.com/">Evapo-rust</a> after hearing about it on forums and from various YouTubers, some of whom were paid to say nice things about it. It claims to be "super safe" and several of their example images show it cleaning up chrome, so I decided to give it a shot. There was just one caveat...</p>
<figure>
<img src="images/fork-cover/erust-temp.png">
</figure>
<p>It was towards the end of winter when I did this and I didn't want to use the Evapo-rust inside my house. The label on the bottle makes it sound like it's perfectly fine to stir a spoonful into your morning tea but <a href="https://images.salsify.com/image/upload/s--ehrZNHce--/dkbajupk7iagslw4ne4m">the SDS</a> says it can cause skin and eye irritation. So in the garage it had to be. Only problem is that the garage wasn't warm enough...</p>
<figure>
<img src="images/fork-cover/garage-temp.png">
</figure>
<p>And from what I was reading online, several people said that they weren't having any luck with Evapo-rust under 65 degrees, which would be even more of a challenge.</p>
<p>My solution to this was a makeshift version of an Easy-Bake oven. Or hot plate, anyway. I have these LED flood lights I sometimes use as work lights, and they throw off 100W of heat once they get up to temperature. This seemed like enough to warm a liquid to at least room temperature. I did an experiment with just water and found that after a few hours, the temperature of the water rose to about 84 degrees and stayed put. Perfect! All that was left to do was give the real thing a soak.</p>
<figure>
<img src="images/fork-cover/bathtime-640.jpg">
<figcaption>Bath time!</figcaption>
</figure>
<p>This worked surprisingly well. I let the Evapo-rust do its thing overnight and came back the next day to rinse it off and found nearly all of the rust had been dissolved. I could have let it go for a bit longer and got the last little bit, but I was getting impatient. The little bit of rust that was left was a truly minuscule amount that paint will cover up and encapsulate forever.</p>
<p>I masked off all the chrome on the outside and shot the inside with an etching metal primer, followed by a slightly-metallic silver automotive paint, both of which I already had lying around.</p>
<figure>
<img src="images/fork-cover/inside-painted-640.jpg">
<figcaption>Nobody will ever see this. :/</figcaption>
</figure>
<h1>The Vinyl</h1>
<p>Now for the fun bit. According to The Internet, the good stuff is genuine <a href="https://www.orafol.com/en/americas/products/oracal-651-intermediate-cal">Oracal 651</a>. Supposed to be suitable for just about anything, even outdoor and automotive applications. So I ordered some in matte black and metallic silver, which as we all know is code for "high-luster gray."</p>
<p>My wife has a Silhouette vinyl cutter which does some pretty amazing stuff. Unfortunately, the company is run by shitheads, because even after you buy the machine, you can't simply import an SVG into the program and go to town. Heavens no. You have to <em>buy an upgrade to the software</em> before it will even let you work with vector graphics. Which is insane, because it's literally just a 2D plotter with a knife instead of a pen.</p>
<figure>
<a href="images/fork-cover/marketing-wank.png">
<img src="images/fork-cover/marketing-wank-640.png">
</a>
<figcaption>You should click this, the Silhouette marketing team demonstrate some refreshing candor!</figcaption>
</figure>
<p>So, as a result of the aforementioned buffoonery, I wanted to see if I could drive this sucker with Linux. Some half-hearted googling led me to the <a href="https://www.codelv.com/projects/inkcut/">inkcut</a> project, which is sort of an all-in-one program for all kinds of 2D vector devices including plotters, engravers, and CNC machines. It seems to be written largely in Python (which is good!) but hasn't seen a lot of upkeep and maintenance in recent years. I managed to get it installed after some difficulty, but even though it listed the Silhouette Cameo as a supported device, there's no option to select it in the UI. Even after reading some of the source code, I couldn't figure out how to easily add it.</p>
<p>Almost by accident, I then stumbled upon <a href="https://github.com/fablabnbg/inkscape-silhouette">inkscape-silhouette</a>, which is just a plugin for Inkscape. Apparently it works on Windows and Mac OS as well. After you install it, you literally just run Inkscape, load up your drawing, launch the plugin, and send the drawing to the cutter. Very easy, very nice. The only real caveat I found was that everything in your drawing has to be a Path, but that's trivial to do. Big shout out to the folks at <a href="http://www.fablab-nuernberg.de/">Fab Lab Region Nürnberg</a> for their work on this extension.</p>
<p>If by some chance you want to cut out your own Suzuki logos, there is a link to an SVG file containing the background and logos at the end of this post.</p>
<p>Since I had a ton of vinyl (it's pretty affordable in bulk!), I decided to just cut out a big batch of these all at once, in case I messed a few up, and to give a few away to friends. I started with the black backgrounds and then the silver S logos and Suzuki logos.</p>
<figure>
<img src="images/fork-cover/backgrounds-640.jpg">
<figcaption>A bunch of backgrounds.</figcaption>
</figure>
<figure>
<img src="images/fork-cover/foregrounds-640.jpg">
<figcaption>A bunch of foregrounds.</figcaption>
</figure>
<figure>
<img src="images/fork-cover/completed-emblems-640.jpg">
<figcaption>All together now.</figcaption>
</figure>
<p>The vinyl we're using has a fairly strong adhesive on the back. The process of layering it and applying it to the object looks something like this:</p>
<ol>
<li>Remove the waste material, with the help of a pointy object if necessary.</li>
<li>Apply a rectangle of transparent contact paper (which is not paper, but plastic) to the top-most design.</li>
<li>Remove the backing from the top layer.</li>
<li>Painstakingly align the top layer with the bottom layer and apply.</li>
<li>Peel off the backing from the bottom layer.</li>
<li>Painstakingly align the whole thing to the object and apply.</li>
</ol>
<p>After that, you are left with a fork cover that looks like this:</p>
<figure>
<img src="images/fork-cover/cover-with-emblem-640.jpg">
<figcaption>With roll of tape for scale.</figcaption>
</figure>
<p>You can see some bubbles here. If they are near the edge, they can be worked out by gently but firmly pushing them toward the edge. In the middle, you can poke them with a pin, rub them a bit, and they almost totally disappear. If they don't, it's no big deal. This is a fork cover for an old motorcycle, not an entry in a concours show.</p>
<h1>The Shining</h1>
<p>Although the vinyl I chose is supposed to do well enough on its own, I decided I wanted to do something to seal the edges of the vinyl against the elements and inevitable bug guts. So I gingerly opened up my cabinet full of half-used cans of spray paint and basically just used the first thing that fell out onto the floor.</p>
<p>As a wise man once said, "I may be stupid but I ain't dumb." I made sure this was actually halfway cromulent for this application and then sprayed it over some scrap vinyl on top of some scrap metal. Looks-wise, it turned out even better than I was hoping and seemed to be durable enough.</p>
<figure>
<img src="images/fork-cover/test-subject-640.jpg">
<figcaption>Our test subject.</figcaption>
</figure>
<p>I again masked off the chrome and then went to town with the lacquer. Here's a look at the end result, before any polishing:</p>
<figure>
<img src="images/fork-cover/final-product-640.jpg">
<figcaption>Our final product.</figcaption>
</figure>
<p>And here it is installed on the bike:</p>
<figure>
<img src="images/fork-cover/natural-habitat-640.jpg">
<figcaption>I'd say it looks like it belongs there.</figcaption>
</figure>
<h1>The Conclusion</h1>
<p>I'm embarrassed to say it didn't even dawn on me until late in the game that instead of a Suzuki logo, I could have drawn and cut out any dang thing I wanted and stuck it on the fork cover. Like a tattoo for my bike. A fictional logo, an inspirational quote, perhaps even a dank meme. Oh well, maybe next time.</p>
<h1>The Resources</h1>
<ul>
<li><a href="https://commons.wikimedia.org/wiki/File:Suzuki_logo_2.svg">Suzuki Logo on Wikimedia Commons</a> and <a href="images/fork-cover/Suzuki_logo_2.svg">Mirror</a></li>
<li><a href="images/fork-cover/logos_cuttable.svg">Background and logos here</a>.</li>
<li><a href="https://github.com/fablabnbg/inkscape-silhouette">Inkscape Silhouette Plugin</a></li>
</ul>
<p>Once you have the plugin installed, all you should have to do is load the SVG above into Inkscape, drag them around however you want, clone them, etc, and send them to the cutter. If you make any modifications, remember that the plugin wants everything to be a path.</p>
<p>Finally, a plug for one of my favorite places on the Internet: If you like classic muscle bikes, you could do worse than to head over to <a href="https://www.thegsresources.com/_forum/">The GS Resources Forum</a> which is populated by a nice bunch of folks who love these old bikes almost as much as helping new owners get them running again.</p>
<h1>Rabbit Holes: The Secret to Technical Expertise</h1>
<p>2019-08-24, by Charles</p>
<figure>
<img src="images/rabbit-holes/image1.png">
<figcaption>
(Alternate Title: How to Shut Up the Ubuntu MOTD, the Long Way)
</figcaption>
</figure>
<p>Sometimes, the simplest questions take you on exciting journeys. This was, in fact, the most powerful and motivating force that got me into doing computery things from a very young age. I would ask a question: how do I do X? After some poking around, I'd discover that I can't do X without learning about Y, and Some Authoritative Resource says you <em>definitely</em> can't do Y without also knowing the arcane black magic of Z. And so on and so forth, until I get myself so buried in tangents that at a certain point I have no choice but to stop and come up for air. Or a potty break and a snack.</p>
<p>In the glamorous tech sector, we call these things rabbit holes. Unless you got into tech solely for the money (you monster), it's stuff like this that we nerds live for. It's how we got our start and crucially, it's how we continue to learn and hone our skillset.</p>
<p>But what, you ask, does a rabbit hole look like? And anyway, don't rabbits live in dens or burrows? First of all, nobody asked you to critique the metaphor. Second, I'll show you. This isn't the deepest or most complex rabbit hole that I've stumbled down but it is recent and that counts for something when I'm itching to write something. Please feel free to follow along on your own instance of Ubuntu 18.04 if you have one handy. When you log into such a host, you are greeted with 27 lines of this here nonsense:</p>
<div class="highlight"><pre><span></span><code>[jayne:~]$ ssh ubuntu@127.0.0.1
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-1044-aws x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Sat Aug 17 01:05:15 UTC 2019
System load: 0.0 Processes: 87
Usage of /: 13.7% of 7.69GB Users logged in: 0
Memory usage: 14% IP address for eth0: 127.0.0.1
Swap usage: 0%
* Keen to learn Istio? It's included in the single-package MicroK8s.
https://snapcraft.io/microk8s
* Canonical Livepatch is available for installation.
- Reduce system reboots and improve kernel security. Activate at:
https://ubuntu.com/livepatch
0 packages can be updated.
0 updates are security updates.
Last login: Fri Aug 16 23:40:01 2019 from 127.0.0.1
</code></pre></div>
<p>The first time you log into an Ubuntu host, all of this is very flashy and impressive. You think, wow, this looks like Serious Business. But right around the 427th time, it's just noise. Then at some inconvenient time in the future--when something on that host breaks badly and you have to endure that crap before you get to your shell prompt--you really hope that one day you can meet the guy who programmed it on the outside chance you might be able to covertly spike his coffee with a large dose of laxative.</p>
<p>In these 27 lines (which is three longer than the default terminal height, mind you), we have:</p>
<ul>
<li>a greeting, showing us the OS, version, kernel version, and CPU architecture</li>
<li>links to documentation</li>
<li>some arbitrary technical information about the host</li>
<li>an ad for some Canonical product</li>
<li>another ad for some Canonical product</li>
<li>package update information</li>
<li>when this account previously logged into this host</li>
</ul>
<p>Somebody somewhere obviously thought that every piece here would be vital information to somebody else. But let's see how we can pare this down to something manageable. Or in the worst case, wipe it out altogether.</p>
<p>I happen to know, from my decades of previous BSD/Linux rabbit holes, that messages which are printed after you log in usually come from the file <code>/etc/motd</code>. "motd", by the way, stands for Message of the Day. In the Olden Days it was a way for administrators to tell users important things about the system or to pass along news, like, "Hey everyone, the print server is down again. We think Phil broke it." On Debian and Ubuntu, most every configuration file has a man page, so let's check out that lead with <code>man motd</code>:</p>
<div class="highlight"><pre><span></span><code>DESCRIPTION
The contents of /etc/motd are displayed by pam_motd(8) after a success‐
ful login but just before it executes the login shell.
The abbreviation "motd" stands for "message of the day", and this file
has been traditionally used for exactly that (it requires much less
disk space than mail to all users).
On Debian GNU/Linux, dynamic content configured at /etc/pam.d/login is
also displayed by pam_exec.
FILES
/etc/motd
/etc/pam.d/login
</code></pre></div>
<p>Great. So if the man page is right, all that crap should be festering in <code>/etc/motd</code>. Let's take a look:</p>
<div class="highlight"><pre><span></span><code>ubuntu@ip-127-0-0-1:~$ cat /etc/motd
cat: /etc/motd: No such file or directory
</code></pre></div>
<p>Well, bugger. That certainly didn't pan out. Moreover, we've learned an important lesson: man pages can lie. Or at the very least, often don't tell the whole truth. Here is where we can try an experiment. What happens if we write something to <code>/etc/motd</code>? Will that replace the default Ubuntu MOTD? Could we be so lucky? Let's make ourselves root and see:</p>
<div class="highlight"><pre><span></span><code>echo "Goodbye, Cruel World" > /etc/motd
</code></pre></div>
<p>And upon logging in again:</p>
<div class="highlight"><pre><span></span><code>[jayne:~]$ ssh ubuntu@127.0.0.1
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-1044-aws x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Sat Aug 17 01:09:15 UTC 2019
System load: 0.0 Processes: 87
Usage of /: 13.7% of 7.69GB Users logged in: 0
Memory usage: 14% IP address for eth0: 127.0.0.1
Swap usage: 0%
* Keen to learn Istio? It's included in the single-package MicroK8s.
https://snapcraft.io/microk8s
* Canonical Livepatch is available for installation.
- Reduce system reboots and improve kernel security. Activate at:
https://ubuntu.com/livepatch
0 packages can be updated.
0 updates are security updates.
Goodbye, Cruel World
Last login: Fri Aug 16 23:45:01 2019 from 127.0.0.1
</code></pre></div>
<p>Stellar. Now there are 28 lines of crap! But at least we know <code>/etc/motd</code> does <em>something</em>. <em>Sigh</em>. And we also know that <code>/etc/</code> is where a bunch of system configuration lives, so maybe there's something else in there that has to do with the MOTD. Let's use <code>grep</code>:</p>
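<p>The exact invocation isn't shown, but my guess is it was something along these lines (the flags here are an assumption; any recursive grep will turn up the same leads):</p>

```shell
# Recursively, case-insensitively search /etc for anything mentioning
# "motd" (exact flags are a guess at what produced the screenshot below).
grep -ri motd /etc 2>/dev/null | sort
```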
<figure>
<img src="images/rabbit-holes/image2.png">
</figure>
<p>It turns out there's quite a lot! There are some promising leads here, but skimming over the list of possibilities, the last few lines catch my attention the most since those files are in a directory called <code>/etc/update-motd.d</code>. These are the files in it:</p>
<div class="highlight"><pre><span></span><code>root@ip-127-0-0-1:/etc/update-motd.d# ls -1 /etc/update-motd.d/
00-header
10-help-text
50-landscape-sysinfo
50-motd-news
80-esm
80-livepatch
90-updates-available
91-release-upgrade
95-hwe-eol
97-overlayroot
98-fsck-at-reboot
98-reboot-required
</code></pre></div>
<p>That's interesting: ignore a few of these, and the rest start to look a lot like our MOTD outline from earlier. Let's see what's in the first one. Omitting the comments and a bit of variable-setting code for brevity, we get:</p>
<div class="highlight"><pre><span></span><code>printf "Welcome to %s (%s %s %s)\n" "$DISTRIB_DESCRIPTION" "$(uname -o)" "$(uname -r)" "$(uname -m)"
</code></pre></div>
<p>And the next one, <code>10-help-text</code> contains mostly this:</p>
<div class="highlight"><pre><span></span><code>printf " * Documentation: https://help.ubuntu.com\n"
printf " * Management: https://landscape.canonical.com\n"
printf " * Support: https://ubuntu.com/advantage\n"
</code></pre></div>
<p>It's as close to a smoking gun as we're likely to see: These definitely appear to match the first few lines of the MOTD. Given that the file names contain an integer prefix, we can guess that <em>probably</em> all the scripts in this directory are executed sequentially and their output blurted out to us when we log in.</p>
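<p>That guess is easy to sanity-check with a toy directory. The directory and script contents below are made up for the demo; the loop mimics what <code>run-parts</code> does, including skipping files without the execute bit:</p>

```shell
# Build a scratch update-motd.d-style directory, then execute its scripts
# in lexical order and concatenate the output -- which is essentially
# what run-parts does.
dir=$(mktemp -d)
printf '#!/bin/sh\necho "Welcome to Toybuntu 1.0"\n' > "$dir/00-header"
printf '#!/bin/sh\necho " * Docs: https://example.com"\n' > "$dir/10-help-text"
printf '#!/bin/sh\necho "never runs"\n' > "$dir/05-no-exec-bit"
chmod +x "$dir/00-header" "$dir/10-help-text"  # 05-no-exec-bit stays non-executable

for script in "$dir"/*; do
    [ -x "$script" ] && "$script"   # like run-parts, skip non-executables
done
```

<p>On a system that has it, swapping the loop for <code>run-parts "$dir"</code> should give the same result.</p>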
<p>But by what? A normal, perfectly sane person would stop here and say, "Well now the rest is easy, you just delete or modify those scripts and Roberta's-yer-auntie!" And you would be right. But who said I was a normal, perfectly sane person? Certainly not Roberta. Let's look at disabling the most obnoxious thing in the MOTD, the Ubuntu news, announcements, and astrology section. I recall that this part changes from time to time so it must be phoning home to Canonical. As a refresher, here's what it says today:</p>
<div class="highlight"><pre><span></span><code> * Keen to learn Istio? It's included in the single-package MicroK8s.
https://snapcraft.io/microk8s
</code></pre></div>
<p>Today, I'm mostly keen to learn how the f to get rid of this message. By perusing a few of the files, I suspect that <code>50-motd-news</code> is likely our guy, not least because there's a URL in it: <code>https://motd.ubuntu.com</code>. And if we go to that URL:</p>
<div class="highlight"><pre><span></span><code>root@ip-127-0-0-1:~# curl https://motd.ubuntu.com
* Keen to learn Istio? It's included in the single-package MicroK8s.
https://snapcraft.io/microk8s
</code></pre></div>
<p>So the <code>/etc/update-motd.d/50-motd-news</code> script is what's printing ads into our young impressionable eyeballs. And again we dive deeper into the rabbit hole because we want to know <em>how</em>. I'll spare you the boring details of the full code listing but basically it's doing this:</p>
<ul>
<li>read some environment variables from <code>/etc/default/motd-news</code></li>
<li>exit immediately if <code>$ENABLED</code> is not <code>1</code></li>
<li>if the <code>--force</code> flag was <em>not</em> provided, print whatever is in <code>/var/cache/motd-news</code></li>
<li>if the <code>--force</code> <em>was</em> provided, go out and fetch the MOTD from https://motd.ubuntu.com</li>
</ul>
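<p>Condensed into shell, the flow looks roughly like this. To be clear, this is a simplified sketch of my reading of the script, not the actual Ubuntu code; the real thing has more machinery (timeouts, user agents, multiple URLs), and I've made the paths arguments so the sketch can run anywhere:</p>

```shell
# Simplified sketch of 50-motd-news's control flow (not the real script).
motd_news() {
    config=$1 cache=$2 force=$3
    . "$config"                          # pulls in ENABLED=0|1
    [ "$ENABLED" = "1" ] || return 0     # disabled: print nothing, exit quietly
    if [ "$force" = "--force" ]; then
        # --force means "refresh": fetch fresh news into the cache
        curl -fsS https://motd.ubuntu.com > "$cache" || return 0
    fi
    cat "$cache"                         # normal login path: replay the cache
}
```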
<p>This tells us that 1) there is a way to disable this part of the MOTD easily by editing <code>/etc/default/motd-news</code> and 2) the output of this script is fetched and then cached somewhere. If we run the script, we get the same output as when we fetched the contents of the URL a moment ago. And if we print the contents of <code>/var/cache/motd-news</code>, we get the same thing as well:</p>
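<p>Disabling it really is a one-variable change. Demonstrated here on a scratch file so the snippet is harmless to run anywhere; on a real host you'd point the <code>sed</code> at <code>/etc/default/motd-news</code> itself, as root:</p>

```shell
# Flip ENABLED=1 to ENABLED=0 -- 50-motd-news bails out early
# whenever ENABLED is anything other than 1.
conf=$(mktemp)
echo 'ENABLED=1' > "$conf"         # stand-in for /etc/default/motd-news
sed -i 's/^ENABLED=.*/ENABLED=0/' "$conf"
cat "$conf"                        # ENABLED=0
```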
<div class="highlight"><pre><span></span><code>root@ip-127-0-0-1:~# /etc/update-motd.d/50-motd-news
* Keen to learn Istio? It's included in the single-package MicroK8s.
https://snapcraft.io/microk8s
root@ip-127-0-0-1:~# cat /var/cache/motd-news
* Keen to learn Istio? It's included in the single-package MicroK8s.
https://snapcraft.io/microk8s
</code></pre></div>
<p>So this pretty much confirms our broad understanding of how this is all working. Pop the cork off the champagne, we figured it out! Woohoo!</p>
<p>But wait... what's that <code>--force</code> flag all about? What calls this script with it? (Not the login process, as far as we know.) And how does <code>/var/cache/motd-news</code> ever get updated with new content? And we still don't know what part of the login process is running the scripts in <code>/etc/update-motd.d</code> in the first place. These are questions that we could certainly live without knowing, but deep down we know that it will be a hollow, meaningless existence unless we find out. It's not champagne time yet, I'm afraid.</p>
<p>Taking another close look at the <code>/etc/update-motd.d/50-motd-news</code> file, this comment catches the eye:</p>
<div class="highlight"><pre><span></span><code># If we've made it here, we've been given the --force argument,
# probably from the systemd motd-news.service. Let's update...
</code></pre></div>
<p>"The systemd motd-news.service," you say? As in, something <em>else</em> besides the login process calls this script? Weirder things I have seen. Let's look for that service:</p>
<div class="highlight"><pre><span></span><code>root@ip-127-0-0-1:~# systemctl list-unit-files | grep motd
motd-news.service static
motd.service masked
motd-news.timer enabled
</code></pre></div>
<p>What we have here is a <code>motd-news</code> service and timer. There's also a <code>motd.service</code>, but it's masked, which means it's quite definitely not doing anything. I'm curious to know what that is for, but the other two are what I'm interested in at the moment.</p>
<p>The Old Way of running jobs on a Unix system at specific intervals is through the cron daemon. You specify the interval as a sequence of fields and the job to run, and then <code>crond</code> takes care of the rest. It's a simple, elegant system and has been serving us well for decades. Systemd, which has good parts and bad parts and insane parts, has come up with a replacement for cron called timers. Long story short, timers are like cron jobs but with a lot more options for scheduling and management. Their configuration is also quite a bit more complex as a result. Another difference is that timers don't execute commands directly; instead, you have to define a systemd <em>service</em> which is then triggered by a timer.</p>
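<p>To make the difference concrete, here's the same "run this twice a day" idea expressed both ways. These are illustrative fragments only; the job path is made up:</p>

```
# cron: one line in /etc/crontab
# m  h    dom mon dow  user  command
  0  0,12  *   *   *   root  /usr/local/bin/refresh-news

# systemd: the schedule goes in a .timer unit...
[Timer]
OnCalendar=00,12:00:00

# ...and the command to run goes in a .service unit of the same name
[Service]
Type=oneshot
ExecStart=/usr/local/bin/refresh-news
```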
<p>Anyway, on Ubuntu the unit files live in <code>/lib/systemd/system</code> so that's where we find the <code>motd-news.*</code> units. Let's look at the timer first.</p>
<div class="highlight"><pre><span></span><code>[Unit]
Description=Message of the Day
[Timer]
OnCalendar=00,12:00:00
RandomizedDelaySec=12h
Persistent=true
OnStartupSec=1min
[Install]
WantedBy=timers.target
</code></pre></div>
<p>The <code>Description</code> field confirms that <code>motd</code> doesn't actually stand for "Melon of the Deep", and the <code>OnCalendar</code> and <code>RandomizedDelaySec</code> fields are what tell systemd to fire the timer at some random time within every 12-hour period. We don't see the service unit listed here because, according to <a href="https://www.freedesktop.org/software/systemd/man/systemd.timer.html">the man page for timers</a>, a timer will by default trigger a service file of the same name. So let's crack open <code>motd-news.service</code>, then:</p>
<div class="highlight"><pre><span></span><code>[Unit]
Description=Message of the Day
After=network-online.target
Documentation=man:update-motd(8)
[Service]
Type=oneshot
ExecStart=/etc/update-motd.d/50-motd-news --force
</code></pre></div>
<p>Aha, there's the <code>--force</code> flag! We are also helpfully informed that the <code>/etc/update-motd.d/</code> directory has its own man page, something that might have been helpful earlier if I'd had the presence of mind to check for it. <em>This</em> is what we could have used instead of the mostly-useless <code>motd</code> man page. An enterprising young hacker could do the world some good by submitting a patch to the Ubuntu <code>motd</code> man page containing a pointer to the <code>update-motd</code> man page.</p>
<p>At any rate, the man page describes a number of assumptions and best practices around writing and maintaining scripts in <code>/etc/update-motd.d</code>. Importantly, it also says, and I quote:</p>
<div class="highlight"><pre><span></span><code>Executable scripts in /etc/update-motd.d/* are executed by pam_motd(8)
as the root user at each login, and this information is concatenated in
/run/motd.dynamic.
</code></pre></div>
<p>But if we look at the PAM config files (<code>/etc/pam.d/login</code> and <code>/etc/pam.d/sshd</code>), they both have this same blurb:</p>
<div class="highlight"><pre><span></span><code># Prints the message of the day upon successful login.
# (Replaces the `MOTD_FILE' option in login.defs)
# This includes a dynamically generated part from /run/motd.dynamic
# and a static (admin-editable) part from /etc/motd.
session optional pam_motd.so motd=/run/motd.dynamic
session optional pam_motd.so noupdate
</code></pre></div>
<p>Cross-referencing this with the pam_motd man page, the <code>motd=/run/motd.dynamic</code> option simply tells <code>pam_motd</code> to print the contents of <code>/run/motd.dynamic</code>. And the <code>noupdate</code> option tells it <em>not</em> to run the scripts in <code>/etc/update-motd.d</code>. So how are the scripts in <code>/etc/update-motd.d</code> <em>actually</em> getting run on every login? One of these man pages is lying to us. Again.</p>
<p>As is sometimes the case, we will probably have to resort to reading source code to figure this one out. Debian and Ubuntu make it extremely easy to get the source code for every package on the system. The first step is to enable the source repositories in <code>/etc/apt/sources.list</code> by uncommenting the lines beginning with <code>deb-src</code> and then running <code>apt update</code>. You also need to install the <code>dpkg-dev</code> package to work with source packages. So, like, do that.</p>
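<p>Spelled out, that step looks like the snippet below. It runs against a sample file so it's safe to try anywhere; on a real host, the <code>sed</code> targets <code>/etc/apt/sources.list</code> itself (as root), followed by <code>apt update</code> and <code>apt install dpkg-dev</code>:</p>

```shell
# The stock Ubuntu sources.list ships its deb-src entries commented out;
# one sed uncomments them. The sample file stands in for the real one.
f=$(mktemp)
cat > "$f" <<'EOF'
deb http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic main
# deb-src http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic main
EOF
sed -i 's/^# deb-src/deb-src/' "$f"
grep 'deb-src' "$f"
```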
<p>Next, as a regular user (not root) find out where <code>pam_motd.so</code> is, then figure out which package it belongs to, and then fetch the source code for that package, like so:</p>
<div class="highlight"><pre><span></span><code>ubuntu@ip-127-0-0-1:~$ find /lib -type f -name pam_motd.so 2>/dev/null
/lib/x86_64-linux-gnu/security/pam_motd.so
ubuntu@ip-127-0-0-1-:~$ dpkg --search /lib/x86_64-linux-gnu/security/pam_motd.so
libpam-modules:amd64: /lib/x86_64-linux-gnu/security/pam_motd.so
ubuntu@ip-127-0-0-1:~$ apt-get source libpam-modules
Reading package lists... Done
Picking 'pam' as source package instead of 'libpam-modules'
NOTICE: 'pam' packaging is maintained in the 'Bzr' version control system at:
https://code.launchpad.net/~ubuntu-core-dev/pam/ubuntu
Please use:
bzr branch https://code.launchpad.net/~ubuntu-core-dev/pam/ubuntu
to retrieve the latest (possibly unreleased) updates to the package.
Need to get 1993 kB of source archives.
Get:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-updates/main pam 1.1.8-3.6ubuntu2.18.04.1 (dsc) [2212 B]
Get:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-updates/main pam 1.1.8-3.6ubuntu2.18.04.1 (tar) [1990 kB]
Fetched 1993 kB in 0s (7884 kB/s)
dpkg-source: info: extracting pam in pam-1.1.8
dpkg-source: info: unpacking pam_1.1.8-3.6ubuntu2.18.04.1.tar.gz
</code></pre></div>
<p>What we're left with is a package description file, a source tarball, and a directory.</p>
<div class="highlight"><pre><span></span><code>ubuntu@ip-127-0-0-1:~$ ls -l
total 1952
drwxrwxr-x 15 ubuntu ubuntu 4096 Feb 27 14:26 pam-1.1.8
-rw-r--r-- 1 ubuntu ubuntu 2212 Feb 28 13:33 pam_1.1.8-3.6ubuntu2.18.04.1.dsc
-rw-r--r-- 1 ubuntu ubuntu 1990490 Feb 28 13:33 pam_1.1.8-3.6ubuntu2.18.04.1.tar.gz
</code></pre></div>
<p>Delving into the directory, some exploration reveals that the source code for <code>pam_motd.so</code> is in <code>pam-1.1.8/modules/pam_motd/pam_motd.c</code>. Does that file have anything to do with <code>/etc/update-motd.d</code>? Let's grep that sucker and see:</p>
<div class="highlight"><pre><span></span><code>ubuntu@ip-127-0-0-1:~/pam-1.1.8/modules/pam_motd$ grep update-motd pam_motd.c
ubuntu@ip-127-0-0-1:~/pam-1.1.8/modules/pam_motd$
</code></pre></div>
<p>Hmm. Nope. Okay, I know that Debian and Ubuntu source packages contain the software's pristine upstream source and that any changes that the distributions make to the package are shipped in the <code>debian</code> directory as patches. Let's check out that angle.</p>
<div class="highlight"><pre><span></span><code>ubuntu@ip-127-0-0-1:~/pam-1.1.8/debian/patches-applied$ grep update-motd *
series:update-motd
series:update-motd-manpage-ref
update-motd:Provide a more dynamic MOTD, based on the short-lived update-motd project.
update-motd:+ /* Run the update-motd dynamic motd scripts, outputting to /run/motd.dynamic.
update-motd:+ if (do_update && (stat("/etc/update-motd.d", &st) == 0)
update-motd:+ if (!system("/usr/bin/env -i PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin run-parts --lsbsysinit /etc/update-motd.d > /run/motd.dynamic.new"))
update-motd:+ Don't run the scripts in <filename>/etc/update-motd.d</filename>
update-motd:+ Don't run the scripts in /etc/update-motd.d to refresh the motd file.
update-motd-manpage-ref:+ <refentrytitle>update-motd</refentrytitle><manvolnum>5</manvolnum>
update-motd-manpage-ref:+\fBupdate-motd\fR(5)
</code></pre></div>
<p>Bingo! Let's have a look at the <code>update-motd</code> patch, that probably has what we want. Sure enough, here's the code that runs the files in <code>/etc/update-motd.d</code>:</p>
<div class="highlight"><pre><span></span><code>+ /* Run the update-motd dynamic motd scripts, outputting to /run/motd.dynamic.
+ This will be displayed only when calling pam_motd with
+ motd=/run/motd.dynamic; current /etc/pam.d/login and /etc/pam.d/sshd
+ display both this file and /etc/motd. */
+ if (do_update && (stat("/etc/update-motd.d", &st) == 0)
+ && S_ISDIR(st.st_mode))
+ {
+ mode_t old_mask = umask(0022);
+ if (!system("/usr/bin/env -i PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin run-parts --lsbsysinit /etc/update-motd.d > /run/motd.dynamic.new"))
+ rename("/run/motd.dynamic.new", "/run/motd.dynamic");
+ umask(old_mask);
+ }
</code></pre></div>
<p>In plain English, what this does is essentially the following: if <code>do_update</code> is true (presumably because the <code>noupdate</code> option was not set) and <code>/etc/update-motd.d</code> exists and is a directory, execute the contents of <code>/etc/update-motd.d</code> via <code>run-parts</code> and drop the output into <code>/run/motd.dynamic</code>.</p>
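<p>If you've never met <code>run-parts</code>, its core job is just "execute every executable file in a directory, in lexical order." A rough sketch of that behavior, with made-up script names (this is an illustration only; the real <code>run-parts</code> also filters filenames and takes options like <code>--lsbsysinit</code>):</p>

```shell
# Emulate the heart of run-parts: run each executable file in a
# directory, in lexical order, and collect the combined output.
# (Script names below are invented for the demo.)
dir=$(mktemp -d)
printf '#!/bin/sh\necho "10: welcome"\n' > "$dir/10-welcome"
printf '#!/bin/sh\necho "50: updates"\n' > "$dir/50-updates"
chmod +x "$dir"/*

# Glob expansion is already sorted, which gives us lexical order.
motd=$(for script in "$dir"/*; do
    [ -x "$script" ] && "$script"
done)
echo "$motd"
rm -rf "$dir"
```

<p>The real pam_motd patch does the same thing, just with the output redirected to <code>/run/motd.dynamic.new</code> and renamed into place on success.</p>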
<p>So we find yet again that the man pages weren't lying, but it could have saved us some effort if the <code>pam_motd</code> man page stated more explicitly that the scripts in <code>/etc/update-motd.d</code> are <em>always</em> run unless the <code>noupdate</code> option is specified.</p>
<p>You can always dig deeper and deeper into a rabbit hole like this one until you wind up wandering aimlessly through the fields of particle physics, but this is about as far as I'm willing to take this one. All of my major questions, from the beginning and along the way, have been answered well enough. This was a fun diversion, even though the practical upshot is relatively trivial. If you stuck with it through to the end, you should now be relatively well-equipped to explore similar rabbit holes yourself. Just remember, when you finally solve one, try to go easy on the champagne.</p>Graphite and the Energy Bridge to Nowhere2018-07-06T00:00:00-04:002023-12-10T00:49:37-05:00Charlestag:None,2018-07-06:articles/2018/July/graphite-and-the-energy-bridge-to-nowhere.html<figure>
<img src="images/energy-bridge/energy_bridge.png">
</figure>
<p>Remember the good old days when neighbors knew each other, cars had chrome bumpers, and nobody had any idea how much electricity they were using until a bill came in the mail? Well, a few years back, <a href="https://www.dteenergy.com">my local power company</a> started offering <a href="https://www.newlook.dteenergy.com/wps/wcm/connect/dte-web/insight/insight-app/">a mobile app</a> that customers could use to track their energy usage. This was made possible by the introduction of smart meters which transmit (by radio) each customer's electricity usage to the power company at a rate of once per hour or so. With this app, it is possible to see your actual real-time usage if you also install this little device they call an "energy bridge" in your home<sup id="fnref:1"><a class="footnote-ref" href="#fn:1">1</a></sup>.</p>
<p>In a nutshell, the energy bridge is just a little square box with a power connector and Ethernet jack. It contains a <a href="https://en.wikipedia.org/wiki/Zigbee">Zigbee</a> radio that speaks softly to the smart meter to get the current energy usage. The bridge device then carefully packages the usage information and sends it up to The Cloud where the mobile app on your phone can lovingly harvest the data and knead it into beautiful graphs and all of that crap.</p>
<figure>
<img src="images/energy-bridge/insight.png">
</figure>
<p>While using the device, I gained a lot of insight (zing!) into where my energy usage was going. For example, I now know that my house's electricity usage is at least 4 kW when the air conditioner is running. The next biggest energy-hungry device is the dishwasher, which uses a little over 2 kW when it's running<sup id="fnref:2"><a class="footnote-ref" href="#fn:2">2</a></sup>. The house's idle power draw tends to be between 300 and 500 watts, so clearly I have some sleuthing to do.</p>
<p>So, in that regard, it's been a dandy little device and I'd like to continue using it. Unfortunately the power company has deprecated this energy bridge. The mobile app no longer displays real-time usage and there's an info page telling me that they now offer an upgraded model, which is also "free", but you have to pay $1 per month to use it. I'm sure that's a great deal for a lot of people. But as you may already know, I'm an unapologetic cheapskate. So let's get started on a solution that's half as useful and ten times the effort, shall we?</p>
<p>Through friends at work, I learned that even though the old bridge won't work with the power company's new cloud stuff, it can still be accessed locally over the network. Port 80 (HTTP) is open but the root URL only returns a <a href="https://en.wikipedia.org/wiki/HTTP_404">404</a>. I don't know how they were discovered but people have found endpoints that return various things:</p>
<ul>
<li><code>/status</code> returns a good amount of JSON-formatted info about the device.</li>
<li><code>/instantaneousdemand</code> returns the current energy usage as a string.</li>
<li><code>/history/since/{epoch_seconds}</code> returns binary output of some kind.</li>
<li><code>/history/since/{epoch_seconds}/until/{epoch_seconds}</code> returns more binary output.</li>
</ul>
<p>So far as I've been able to gather, nobody has figured out the binary output of the <code>/history</code> endpoints yet. But in any case, the most important one here is <code>/instantaneousdemand</code>, which returns nothing more than a fixed-point string and the suffix "kW". Here's mine, with the AC firing on all cylinders:</p>
<div class="highlight"><pre><span></span><code>$<span class="w"> </span>curl<span class="w"> </span>-sSL<span class="w"> </span>http://192.168.0.199/instantaneousdemand
<span class="m">000004</span>.313<span class="w"> </span>kW
</code></pre></div>
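<p>Since we'll eventually want just the number out of that reading, the zero-padded value is easy to split off and normalize in the shell. Here's one way, using the sample string above (<code>printf</code> stands in for the actual <code>curl</code> call):</p>

```shell
# The bridge returns something like "000004.313 kW"; keep only the
# numeric field and drop the zero-padding.
# printf here stands in for:
#   curl -sSL http://192.168.0.199/instantaneousdemand
reading="000004.313 kW"
kw=$(printf '%s\n' "$reading" | awk '{ printf "%.3f", $1 }')
echo "$kw kW"
```

<p>awk's numeric conversion eats the leading zeros for us, so we end up with a clean <code>4.313</code>.</p>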
<p>Careful experiments (read: wild-ass guesstimation) have revealed that there is about a 3-second delay between the time that some power-hungry device is turned on and the time that the energy bridge changes its reading. Additionally, it does not appear to update the reading more often than about every 3 seconds. So we have about 3-second granularity at best. Not quite real-time but good enough for government work.</p>
<p>Since this is something we can query, it's something we can store and graph as well. These days you can't even walk down the street without a home automation framework stopping you and telling you that it's sunny outside today but I'm going the more DIY route and will attempt to pipe the energy bridge data into <a href="https://graphiteapp.org/">Graphite</a>, a collection of software components for collecting and reporting time-series data.</p>
<p><strong>Disclaimer</strong>: "Graphite" is actually an interconnected legion of daemons, programs, and libraries. One of these (the web interface) is also called "Graphite." This poses a bit of an ambiguity problem, as you might imagine. I don't know about you, but I don't have time for all of that, so I am <em>only</em> going to use "Graphite" to refer to the system as a whole and am deliberately choosing to ignore the names of the subsystems and libraries for the remainder of the article for your sanity. You're welcome.</p>
<p>Graphite is most certainly overkill for graphing a single metric but my motives are decidedly ulterior in that I wanted to tinker with Graphite a bit anyway. If real home monitoring is your aim, you would be better off taking the rest of the information here and just piping it into your home automation doodad of choice.</p>
<p>Graphite is actually not a trivial thing to set up for a proper production-quality deployment. But today I'm going to cheat and use <a href="https://www.docker.com/">Docker</a> as the least frictional way to get something useful going. If you're following along with me, you'll need to <a href="https://docs.docker.com/install/">install Docker</a> first. Before we can fire up Graphite, we need a smidge of configuration. The default retention for metrics is this:</p>
<div class="highlight"><pre><span></span><code>retentions = 10s:6h,1m:6d,10m:1800d
</code></pre></div>
<p>This says, "Retain one value every ten seconds for the last six hours, and then after that, one value every minute for the last six days, and after that, one value every ten minutes for the last (almost) five years. And then nothing after that." Since we know we can get at least three seconds of resolution out of the bridge, we'd like to bump the retention down to at least that much in order to get as close to real time as we can. Create the following configuration file as <code>storage-schemas.conf</code> and we'll pass it into the container at start up. Since this is a demo, we're just going to throw away all data past 24 hours.</p>
<div class="highlight"><pre><span></span><code># Carbon's internal metrics. This entry should match what is specified in
# CARBON_METRIC_PREFIX and CARBON_METRIC_INTERVAL settings
[carbon]
pattern = ^carbon\.
retentions = 10s:6h,1m:90d
[default_retention]
pattern = .*
retentions = 3s:24h
</code></pre></div>
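<p>Incidentally, it's easy to see what a retention policy costs in storage: each archive holds its duration divided by its resolution in data points. For the default <code>10s:6h,1m:6d,10m:1800d</code> policy, that works out to:</p>

```shell
# Data points per archive = retention duration / resolution (in seconds)
p1=$(( 6 * 3600 / 10 ))        # 10s:6h    -> 2160 points
p2=$(( 6 * 86400 / 60 ))       # 1m:6d     -> 8640 points
p3=$(( 1800 * 86400 / 600 ))   # 10m:1800d -> 259200 points
echo "$p1 $p2 $p3"
```

<p>So even five years of ten-minute samples is only a few hundred thousand points per metric, and our demo's <code>3s:24h</code> schema is a mere 28,800 points. Nothing to worry about.</p>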
<p>Then we just spin up the container with the ports we care about:</p>
<div class="highlight"><pre><span></span><code>docker<span class="w"> </span>run<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>-p<span class="w"> </span><span class="m">80</span>:80<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>-p<span class="w"> </span><span class="m">2003</span>:2003<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>-v<span class="w"> </span><span class="nv">$PWD</span>/storage-schemas.conf:/opt/graphite/conf/storage-schemas.conf<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>graphiteapp/graphite-statsd
</code></pre></div>
<p>If you're not aware, the <code>-v</code> argument mounts a local file (or directory) from the host system inside the filesystem of the container. The <code>-p</code> argument maps your host system ports to the container ports, so this is just telling Docker to hook up port 80 on <code>localhost</code> to port 80 inside the container. Port 80 is the HTTP web interface to Graphite and port 2003 is where we'll be sending our data. If you take a moment to open up your web browser to http://localhost/, you should see this:</p>
<figure>
<a href="images/energy-bridge/graphite_fresh.png">
<img src="images/energy-bridge/graphite_fresh_640.png">
</a>
</figure>
<p>It's always disheartening to not have any data, so let's enhearten ourselves up a little bit. One way that we can get data into Graphite is to just blast it into a TCP socket on port 2003 in the following format:</p>
<div class="highlight"><pre><span></span><code><metric_name> <metric_value> <timestamp>\n
</code></pre></div>
<p>The <code>metric_name</code> is how we refer to the metric. When you have lots of metrics, you typically organize them into a hierarchy (a tree), much like files on a filesystem. This keeps them both organized and descriptive. One such metric might be <code>cluster.production.node42.cpu.load.5_min</code> or so. For graphing the power usage of our house, we'd be happy enough with something simple like <code>house.power.current_usage</code>.</p>
<p>The <code>metric_value</code> sounds like what it is. The value of the thing we're going to store and graph.</p>
<p>The <code>timestamp</code> is simply the time associated with the metric. If you're stuffing data into Graphite as you receive it, as we are, this will be the current time. We send it as an integer representing the number of seconds elapsed since the Unix epoch.</p>
<p>We can spoon feed these into Graphite one at a time with nothing more than a simple <code>netcat</code> command:</p>
<div class="highlight"><pre><span></span><code><span class="nb">echo</span><span class="w"> </span><span class="s2">"house.power.current_usage </span><span class="nv">$current_value</span><span class="s2"> </span><span class="k">$(</span>date<span class="w"> </span>+%s<span class="k">)</span><span class="s2">"</span><span class="w"> </span><span class="p">|</span><span class="w"> </span>nc<span class="w"> </span>-q0<span class="w"> </span>localhost<span class="w"> </span><span class="m">2003</span>
</code></pre></div>
<p>So now we know:</p>
<ul>
<li>how to get the current power usage from the energy bridge,</li>
<li>how to set up a quick-and-dirty graphite instance, and</li>
<li>how to get the data into Graphite.</li>
</ul>
<p>Like everything I do in my life, there are better ways to go about this, but for the sake of simplicity and illustration, we're going to whip up a little shell script to take care of that last bit. If you know what you are doing with Python, Ruby, Delphi, or what have you, then you are encouraged to do that instead.</p>
<div class="highlight"><pre><span></span><code><span class="ch">#!/usr/bin/env bash</span>
<span class="c1"># grab energy usage info from the energy bridge</span>
<span class="c1"># and send it to graphite.</span>
<span class="nb">readonly</span><span class="w"> </span><span class="nv">bridge_url</span><span class="o">=</span>http://192.168.0.199/instantaneousdemand
<span class="nb">readonly</span><span class="w"> </span><span class="nv">graphite_metric</span><span class="o">=</span>house.power.current_usage
<span class="nb">readonly</span><span class="w"> </span><span class="nv">graphite_host</span><span class="o">=</span>localhost
<span class="nb">readonly</span><span class="w"> </span><span class="nv">graphite_port</span><span class="o">=</span><span class="m">2003</span>
<span class="nb">readonly</span><span class="w"> </span><span class="nv">interval</span><span class="o">=</span><span class="m">2</span><span class="w"> </span><span class="c1"># seconds</span>
<span class="k">while</span><span class="w"> </span>:<span class="p">;</span><span class="w"> </span><span class="k">do</span>
<span class="w"> </span><span class="nv">current_value</span><span class="o">=</span><span class="k">$(</span>curl<span class="w"> </span>-sSL<span class="w"> </span><span class="nv">$bridge_url</span><span class="w"> </span><span class="p">|</span><span class="w"> </span>cut<span class="w"> </span>-f1<span class="w"> </span>-d<span class="s1">' '</span><span class="k">)</span>
<span class="w"> </span><span class="nb">echo</span><span class="w"> </span><span class="s2">"</span><span class="nv">$graphite_metric</span><span class="s2"> </span><span class="nv">$current_value</span><span class="s2"> </span><span class="k">$(</span>date<span class="w"> </span>+%s<span class="k">)</span><span class="s2">"</span><span class="w"> </span><span class="p">|</span><span class="w"> </span><span class="se">\</span>
<span class="w"> </span>nc<span class="w"> </span>-q0<span class="w"> </span><span class="nv">$graphite_host</span><span class="w"> </span><span class="nv">$graphite_port</span>
<span class="w"> </span>sleep<span class="w"> </span><span class="nv">$interval</span>
<span class="k">done</span>
</code></pre></div>
<p>This is basically automating the two things we already showed above--getting the data from the bridge and pushing it into Graphite. We have an infinite loop that fetches data from the bridge and pushes it to Graphite, with a <code>sleep</code> to keep it from also turning into a CPU busy loop.</p>
<p>You'll notice that <code>$interval</code> is 2 seconds, not 3. Err... whatnow? Well, Graphite's time-series database expects a data point <em>at least</em> every 3 seconds based on the configuration that we gave it above. If the script doesn't keep up with that, we end up with null values in the database, which aren't a huge problem but can make the graph look weird. Remember also that we're hitting the bridge on every loop through the script and we don't have much control over how long it takes for that thing to respond. So that extra second acts as a buffer against various delays in the delivery of the metrics.</p>
<p>Now if we let that run for a few minutes, we can then go back to our browser window and reload Graphite. There's a new folder underneath the "Metrics" tree called "house". Expand that, and then "power", and we're finally at our metric called "current_usage". If we click on that, we are rewarded for all of our efforts with... a thin blue line!</p>
<figure>
<a href="images/energy-bridge/thin_blue_line.png">
<img src="images/energy-bridge/thin_blue_line_640.png">
</a>
</figure>
<p>By default Graphite is showing us a graph of the last 24 hours, when we've only been jamming stats into it for a few minutes. In order to see anything interesting, we have to manually select our time period by clicking on the clock icon and selecting a more appropriate time range. Let's say 15 minutes. With that done, we have a much cooler graph. It's quite obvious when exactly my AC kicked on:</p>
<figure>
<a href="images/energy-bridge/ten_minutes.png">
<img src="images/energy-bridge/ten_minutes_640.png">
</a>
</figure>
<p>There are a whole bunch of ways we can customize this graph, just start clicking around to see some of them. Additionally, Graphite has an API that allows you to fetch any graph in a variety of formats and embed it elsewhere. A widget on your phone. A daily email to yourself. Or you could even display it on a digital dashboard in your kitchen to remind your family how much money is being wasted when lights and appliances are absent-mindedly left on, to pick a totally random and completely fictional example, I assure you, if you are reading this, dear.</p>
<p>Of course if you wanted to make this a permanent installation, there's still lots to do to make it easier to deploy, resilient in the face of failure, and so on. But those parts are boring and are therefore best left as an exercise to the reader.</p>
<div class="footnote">
<hr>
<ol>
<li id="fn:1">
<p>The app also tried to gamify your <em>energy usage experience</em> by offering achievements, points, levels, goals, and more. It was obnoxious to the point of making the core functionality of the app (tracking energy usage) basically unusable. At one point, the app would crash while trying to display some 143 or so "achievements" that I had obtained while taking literally no deliberate action. And there was no way to opt out of it. Blessedly, it looks like that whole thing was scrapped as of the current version of the app. <a class="footnote-backref" href="#fnref:1" title="Jump back to footnote 1 in the text">↩</a></p>
</li>
<li id="fn:2">
<p>We've all heard the factoid that running a dishwasher uses less water than washing dishes by hand, which is technically true. Missing from this is the fact that dishwashers also use a crap-ton of electricity to heat up the water, which actually makes them soulless tree-eating automatons. <a class="footnote-backref" href="#fnref:2" title="Jump back to footnote 2 in the text">↩</a></p>
</li>
</ol>
</div>Rebuilding Docker for Custom Networks, a SysAdmin Tale2018-05-04T00:00:00-04:002023-12-10T00:49:37-05:00Charlestag:None,2018-05-04:articles/2018/May/rebuilding-docker-for-custom-networks-a-sysadmin-tale.html<p>At work, we use <a href="https://www.docker.com/">Docker</a> for developing, testing, and deploying applications. For the most part, it has simplified our lives greatly at the cost of a few annoyances and headaches here and there. I just spent the better part of a week dealing with one of them and since misery loves company, I'm going to share the pain with you. Aren't you excited?</p>
<p>Although we're looking into other options, right now we use <a href="https://docs.docker.com/compose/">Docker Compose</a> to deploy applications individually to one of several application hosts. Docker Compose, if you are not already familiar with it, essentially lets you deploy a collection of Docker containers as a single service. So for a blog, to pick a totally random and arbitrary example, you will have a container for the blog engine itself, another for the database it connects to, another one to handle the reverse proxy for TLS, perhaps yet another for ElasticSearch, and so on.</p>
<p>Now, when you deploy a multi-containered application like this with Compose, it creates a network so that all of the containers can talk to one another. This is an isolated network that can reach back out to the rest of the world via NAT. The only way traffic gets into this network from the outside is when you map ports into a container in your <code>docker-compose.yml</code> file. If you don't tell Compose which network to use for the set of containers, it will happily create one for you. It does this by pulling networks out of a default pool of <code>172.[17-31].0.0/16</code> and <code>192.168.[0-240].0/20</code>.</p>
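<p>That default pool is bigger than it might look. Carving <code>192.168.0.0/16</code> into <code>/20</code>s yields sixteen networks, which together with the fifteen <code>172.[17-31].0.0/16</code> networks gives Compose 31 candidate networks per host (a number that matches the comment in Docker's own source, which we'll see below):</p>

```shell
# Count the networks in Docker's default "broad" pool:
# 172.17.0.0/16 through 172.31.0.0/16, plus 192.168.0.0/16 carved into /20s.
n_172=$(( 31 - 17 + 1 ))       # fifteen /16 networks
n_192=$(( 2 ** (20 - 16) ))    # sixteen /20 networks inside one /16
total=$(( n_172 + n_192 ))
echo "$total"
```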
<p>As long as your Docker host never needs to communicate with anything outside of itself on those subnets, you are fine. (Can you see where this is going? I bet you can see where this is going.) At my day job, however, an edict came down from on high that a bunch of our infrastructure had to be moved into the <code>172.16/12</code> space. Like me, you might think, "Well this is fine, because those Docker networks are all internal to the host." Like me, you'd be wrong. Because when Docker creates these networks, it does so by creating a bridge for each one. Which adds an entry to the kernel's routing table for those networks. Which means if <em>anything</em> on the host wants to talk to something in <code>172.16/12</code>, it can't because the routing table says those IPs are not reachable via the default gateway:</p>
<div class="highlight"><pre><span></span><code>default via 10.1.120.1 dev wlp1s0 proto static metric 600
10.1.120.0/23 dev wlp1s0 proto kernel scope link src 10.1.121.89 metric 600
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.18.0.0/16 dev br-415a70ddd828 proto kernel scope link src 172.18.0.1
</code></pre></div>
<p>In the above example, <code>172.17.0.0/16</code> is our <code>docker0</code> network. We're going to ignore that for now. The relevant network is <code>172.18.0.0/16</code>, which was created automatically by Docker Compose in response to deploying a new application.</p>
<p>In a fictional utopia of rainbow waterfalls, free beer and reasonable software engineering practices you would be able to tell Docker, "Hey there, buddy. Any chance you can use a different IP space for those internal networks you like to create?" And Docker would say, "Of course, friend! Just put that awesome stuff in the config file and I'll take care of everything. Also, here's some more beer!"</p>
<p>However, in the universe we presently inhabit, Docker is a bit of a jerk. Those 172.x and 192.168.x pools are literally hard-coded into Docker. To be fair, there is <a href="https://github.com/moby/moby/pull/36396">work underway</a> to support specifying default address pools. This is good but <em>as I write this</em> it's a little too late to help me and others like me who are facing this IP space conflict now. It has not yet been merged so it's hard to say when it will hit an actual release.</p>
<p>What are our options right now?</p>
<h2>Manually Specify Application Networks</h2>
<p>In the <code>docker-compose.yml</code> file for each application we can specify the IP space of the network it will use. There are two obvious drawbacks to this, though. The first is that if you maintain the deployments of, say, 50-ish applications, individually mangling the config files for each of these is a huge pain. Moreover, you would have to track these assignments somehow (e.g. in everyone's favorite database, a spreadsheet, perhaps) because they can't overlap on the same host. Frankly, I'd rather stab myself in the ear.</p>
<h2>Hax0r Your Routing Table</h2>
<p>If there are only certain subnets within Docker's default address pool that you need to avoid, you can add those to the routing table of the host and that will cause Docker to skip them when automatically creating its networks. This is what I did as a short-term fix. The reason it's a short-term fix in my case is because ideally I don't want Docker to be using anything in its currently hard-coded list. The organization I'm working for uses the entirety of <em>all three</em> of the well-known private IPv4 address spaces on its internal network and these are expected to be routable from everywhere. I mean, I'm no network architect but let's just say that's not what I would have done.</p>
<h2>Wait for Docker to Support Custom Address Pools</h2>
<p>This is the option that involves the least amount of work, so if you can get away with waiting for a release that supports custom address pools, then congratulations on being lazy. But seriously, if you're not facing any show-stopping IP conflicts with the default pool then this is the most reasonable option by far. </p>
<h2>Patch and Rebuild Docker</h2>
<p>Since the default address space is hard-coded into docker, a viable option is to patch Docker to use a different space. Here is roughly how. Whenever I start a project like this, I like to make a series of laughable assumptions:</p>
<p>1) You're running Docker on Ubuntu Xenial 16.04. Although this should work fine for any OS that Docker officially supports, this is what I tested it on and it's what I'll show.</p>
<p>2) You're running release 18.03.1-ce of Docker, which is the latest release as of this writing. Again, the same general idea applies to other versions of Docker but the code might be different, or the original problem might be fixed in newer versions, etc.</p>
<p>3) You're running Docker CE, not EE or any other variant. Because I don't know how to build the others.</p>
<p>If you don't have an Ubuntu 16.04 machine handy, blindly follow my expert example and fire one up in a VM. Note that this VM is going to need unhindered access to the Internet since the Docker build process fetches git repositories, docker images, and cute baby goat videos for all I know. You'll want something in the vicinity of 4 CPU cores, 8 GB of RAM, and 20 GB of disk space. Less might work fine. More is better. A speedy Internet connection is highly recommended.</p>
<p>The first thing you need to do is install a few dependencies: git, GNU make, and docker-ce. Yes, you read that right. You need Docker to build Docker. Don't ask why; this is not a rabbit hole we're going to throw ourselves down today. If some version of Docker is installed already and it's not the one we're building, stop any currently running containers and uninstall it. </p>
<p>We'll start with setting up the Docker apt repository. The details for this are <a href="https://docs.docker.com/install/linux/docker-ce/ubuntu/#set-up-the-repository">over here</a> but here's the simplified version:</p>
<div class="highlight"><pre><span></span><code># download and add the docker apt repository GPG key
wget -O - https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# add the docker apt repo to your system
echo "deb [arch=amd64] http://download.docker.com/linux/ubuntu xenial stable" | sudo tee /etc/apt/sources.list.d/docker.list
# update the package index
sudo apt update
</code></pre></div>
<p>Next we install git, make, and Docker:</p>
<div class="highlight"><pre><span></span><code>sudo apt -y install git make docker-ce
</code></pre></div>
<p>Then check out the <code>docker-ce</code> git repo and <code>cd</code> into it:</p>
<div class="highlight"><pre><span></span><code>git clone https://github.com/docker/docker-ce.git
cd docker-ce
</code></pre></div>
<p>Because we want to modify the release that we're using, switch to the tag corresponding to it. Use <code>docker --version</code> to see yours. In our case, that's version <code>18.03.1-ce</code> (note that the version string does not start with a <code>v</code> but the tag does):</p>
<div class="highlight"><pre><span></span><code>git checkout v18.03.1-ce
</code></pre></div>
<p>Now we can patch the Docker source. If you haven't heard, Docker is written in <a href="https://golang.org/">Go</a>, so knowing it will be helpful, although not strictly necessary for such a simple change as this. The file containing the IP address pools is <code>components/engine/vendor/github.com/docker/libnetwork/ipamutils/utils.go</code>. Let's edit that and see what we get. If you're just following along for entertainment (you weirdo) you can see the same thing <a href="https://github.com/docker/docker-ce/blob/v18.03.1-ce/components/engine/vendor/github.com/docker/libnetwork/ipamutils/utils.go">here</a>.</p>
<div class="highlight"><pre><span></span><code>vim components/engine/vendor/github.com/docker/libnetwork/ipamutils/utils.go
</code></pre></div>
<p>Here are the important parts:</p>
<div class="highlight"><pre><span></span><code>var (
// PredefinedBroadNetworks contains a list of 31 IPv4 private networks with host size 16 and 12
// (172.17-31.x.x/16, 192.168.x.x/20) which do not overlap with the networks in `PredefinedGranularNetworks`
PredefinedBroadNetworks []*net.IPNet
// PredefinedGranularNetworks contains a list of 64K IPv4 private networks with host size 8
// (10.x.x.x/24) which do not overlap with the networks in `PredefinedBroadNetworks`
PredefinedGranularNetworks []*net.IPNet
initNetworksOnce sync.Once
defaultBroadNetwork = []*NetworkToSplit{{"172.17.0.0/16", 16}, {"172.18.0.0/16", 16}, {"172.19.0.0/16", 16},
{"172.20.0.0/14", 16}, {"172.24.0.0/14", 16}, {"172.28.0.0/14", 16},
{"192.168.0.0/16", 20}}
defaultGranularNetwork = []*NetworkToSplit{{"10.0.0.0/8", 24}}
)
</code></pre></div>
<p><code>defaultBroadNetwork</code> is the thing we're interested in. Notice that it's a list of several networks. Each list element contains an IP address range in <a href="https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing">CIDR</a> notation and the size of the network. For example, if we take <code>{"172.20.0.0/14", 16}</code>, then <code>172.20.0.0/14</code> is the range and <code>16</code> is the size (number of bits) of each network to dole out from the total range given. To further illustrate, <code>172.20.0.0/14</code> represents the range of addresses from <code>172.20.0.0</code> to <code>172.23.255.255</code> and if we specify a network size of <code>16</code>, Docker will allocate the following networks from that range:</p>
<div class="highlight"><pre><span></span><code>172.20.0.0 - 172.20.255.255
172.21.0.0 - 172.21.255.255
172.22.0.0 - 172.22.255.255
172.23.0.0 - 172.23.255.255
</code></pre></div>
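<p>In general, the number of networks you get out of a range is two to the power of the difference between the per-network size and the range's prefix length. For the <code>{"172.20.0.0/14", 16}</code> example, that arithmetic looks like this (a /14 spans four consecutive values of the second octet, so we can enumerate them directly):</p>

```shell
# Networks doled out of 172.20.0.0/14 at size /16: 2^(16-14) = 4
range_prefix=14
net_size=16
count=$(( 2 ** (net_size - range_prefix) ))

# Enumerate the /16s: the second octet runs from 20 through 23.
i=0
while [ "$i" -lt "$count" ]; do
    net="172.$(( 20 + i )).0.0/16"
    echo "$net"
    i=$(( i + 1 ))
done
```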
<p>In my case, I want to pick a network that has no chance of being routable anywhere <a href="https://en.wikipedia.org/wiki/Private_network#Private_IPv4_address_spaces">on the organization's internal network</a> or on the Internet. There are two candidates here: the link-local address block <code>169.254/16</code> and the <a href="https://en.wikipedia.org/wiki/Carrier-grade_NAT">carrier-grade NAT</a> block <code>100.64/10</code>.</p>
<p>I can already see the pedantic contingent rising up from their Aeron chairs and shaking their pocket protectors in blind fury for making such a bold suggestion but please, everyone, let's just keep our cool for one moment. Technically speaking, this is not an RFC-approved use of either of these spaces. I acknowledge that. But this is America, dammit, and since these spaces by their very definition do not route to the Internet under normal circumstances, they are perfectly cromulent to use in a scenario where they are further restricted to an isolated virtual network on a single host. So really, just relax, it will all be fine.</p>
<p>Of these two options, I think <code>169.254/16</code> is the slightly better choice for two reasons: 1) it is instantly recognizable to most other admins as a non-routable network, and 2) there's a tiny but not impossible chance that you're running Docker somewhere on or near an actual CGNAT space, where reusing <code>100.64/10</code> could cause genuine conflicts. I mean, CGNAT sucks but <em>not</em> torpedoing the network probably takes precedence over angst if you like staying employed.</p>
<p>However, for the purposes of illustration, I'm going to use the CGNAT space <code>100.64/10</code> because there is a non-zero chance that the <code>169.254/16</code> space <a href="https://askubuntu.com/questions/893097/how-to-get-rid-of-169-254-0-0-route">already has an entry in your routing table</a>. Now let's press forward by rejecting Docker's reality and substituting our own:</p>
<div class="highlight"><pre><span></span><code>defaultBroadNetwork = []*NetworkToSplit{{"100.64.0.0/16", 24}}
</code></pre></div>
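<p>As a quick sanity check on the numbers in that one-liner (again, plain Python for illustration only, not part of the Docker build):</p>

```python
# Verify what []*NetworkToSplit{{"100.64.0.0/16", 24}} hands to Docker:
# /24 networks carved out of 100.64.0.0/16.
import ipaddress

pool = ipaddress.ip_network("100.64.0.0/16")
nets = list(pool.subnets(new_prefix=24))

print(len(nets))              # → 256 (networks available)
print(nets[0], nets[-1])      # → 100.64.0.0/24 100.64.255.0/24
print(nets[0].num_addresses)  # → 256 (addresses per network)
```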
<p>This defines the address range <code>100.64.0.0 - 100.64.255.255</code> and tells Docker to grab a <code>/24</code> out of it every time it needs a network. This gives us 256 networks with 256 addresses in each. We save this change, update the comments if so inclined, and then rebuild Docker. Since we're on Ubuntu, we can tell the build system to build the whole thing and then spit out a <code>.deb</code> package at the end. We have to specify the <code>DOCKER_BUILD_PKGS</code> variable because if we leave that out, it will try to build Docker (and packages) for <em>every</em> OS and platform combination it knows about. And that takes longer than you'd like.</p>
<div class="highlight"><pre><span></span><code>make DOCKER_BUILD_PKGS=ubuntu-xenial deb
</code></pre></div>
<p>Once your computer has done a bunch of computing, it's a simple matter of installing the package you just built. If you have any docker containers running, now would be an awesome time to stop them.</p>
<div class="highlight"><pre><span></span><code># stop docker
sudo service docker stop
# remove old docker
sudo apt -y remove docker-ce
# install the newly-built docker
sudo dpkg -i ./components/packaging/deb/debbuild/ubuntu-xenial/docker-ce_18.03.1~ce-0~ubuntu_amd64.deb
# start docker back up
sudo service docker start
</code></pre></div>
<p>One thing to keep in mind is that an <code>apt dist-upgrade</code> may very well overwrite your hacked docker package with one from the docker repository. To keep that from happening, you can tell apt to keep its grubby mitts off it:</p>
<div class="highlight"><pre><span></span><code>sudo apt-mark hold docker-ce
</code></pre></div>
<p>Now then, let's test this puppy out and make sure it actually works. First, we can just create a network and see if it inhabits the right network space:</p>
<div class="highlight"><pre><span></span><code>docker network create foobar
</code></pre></div>
<p>If it worked, we'll see it when we list the networks:</p>
<div class="highlight"><pre><span></span><code>$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
adf186946399        bridge              bridge              local
370a94d1a1f1        foobar              bridge              local
be998f0e4c82        host                host                local
ad949b38a540        none                null                local
</code></pre></div>
<p>We can verify that it took the right subnet by inspecting the network:</p>
<div class="highlight"><pre><span></span><code>$ docker network inspect foobar
...snip...
"Subnet": "100.64.0.0/24",
"Gateway": "100.64.0.1"
...snip...
</code></pre></div>
<p>And just to verify that containers attached to this redefined IP space can actually talk to one another, let's install Docker Compose and set up a test deployment of two containers.</p>
<div class="highlight"><pre><span></span><code># Install Python 3 pip
sudo apt -y install python3-pip
# Install docker-compose
sudo pip3 install docker-compose
# Create directory for test deployment
mkdir ~/docker-net-test
cd ~/docker-net-test
</code></pre></div>
<p>Paste the following file as <code>docker-compose.yml</code>:</p>
<div class="highlight"><pre><span></span><code>version: "2"
services:
  foo:
    image: busybox
    entrypoint: tail -f /dev/null
  bar:
    image: busybox
    entrypoint: tail -f /dev/null
</code></pre></div>
<p>Bring up the deployment:</p>
<div class="highlight"><pre><span></span><code>$ docker-compose up -d
Creating network "docker-net-test_default" with the default driver
Pulling foo (busybox:)...
latest: Pulling from library/busybox
f70adabe43c0: Pull complete
Digest: sha256:58ac43b2cc92c687a32c8be6278e50a063579655fe3090125dcb2af0ff9e1a64
Status: Downloaded newer image for busybox:latest
Creating docker-net-test_foo_1 ... done
Creating docker-net-test_bar_1 ... done
</code></pre></div>
<p>Now for the fun part. Exec into the container for service "foo":</p>
<div class="highlight"><pre><span></span><code>docker-compose exec foo sh
</code></pre></div>
<p>We can see which IP the container was assigned by running:</p>
<div class="highlight"><pre><span></span><code># ip addr show eth0
16: eth0@if17: &lt;BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN&gt; mtu 1500 qdisc noqueue
    link/ether 02:42:64:40:01:02 brd ff:ff:ff:ff:ff:ff
    inet 100.64.1.2/24 brd 100.64.1.255 scope global eth0
       valid_lft forever preferred_lft forever
</code></pre></div>
<p>We got 100.64.1.2, which is exactly what we expected. Yay. Let's ping the other container and see if the network is actually functional:</p>
<div class="highlight"><pre><span></span><code># ping -c3 bar
PING bar (100.64.1.3): 56 data bytes
64 bytes from 100.64.1.3: seq=0 ttl=64 time=0.056 ms
64 bytes from 100.64.1.3: seq=1 ttl=64 time=0.074 ms
64 bytes from 100.64.1.3: seq=2 ttl=64 time=0.083 ms
--- bar ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.056/0.071/0.083 ms
</code></pre></div>
<p>How about that, eh? It actually worked! So there you have it. You've modified, built, installed, and tested Docker using an IP range that (theoretically) does not conflict with most internal networks and definitely should not route to the Internet. If you actually read through all of this, then congratulations because anyone who can stick around all the way through my wild ramblings deserves some kudos.</p>In Beaver We Trust: A Lengthy, Pedantic Review of Ubuntu 18.04 LTS2018-04-26T00:00:00-04:002023-12-10T00:49:37-05:00Charlestag:None,2018-04-26:articles/2018/April/in-beaver-we-trust-a-lengthy-pedantic-review-of-ubuntu-1804-lts.html<figure>
<img src="images/bionic-review/beav_640.jpg">
<figcaption>The Beav in all his geometric glory.</figcaption>
</figure>
<p>Even though I have been using some Linux distribution or another pretty much daily for roughly two decades, I have not had occasion to write a review on one. Today, opportunity knocks not once but 18.04 times.</p>
<h1>Background</h1>
<p><em>Just who in blazes are you and what qualifies you to do a Linux distribution review?</em> you are likely asking yourself. Well then, answer the first: I am a Linux system administrator by trade. And have been now for what seems like ages. I'm an open source advocate who is incredibly set in my ways and I want the tools that I use to just work. I detest paying for software except when it occupies certain specialized cases or represents something more akin to a work of art, such as the video game Portal. But only when it's on sale. Over the years I have made many contributions to a wide variety of projects, mostly in the form of bug reports and patches.</p>
<p>Answer the second: I have a blog. So there.</p>
<p>This review is going to be a little bit different from others in that I am completely unfamiliar with the modern stock Ubuntu desktop since GNOME 2 was abandoned. I started on Ubuntu back in the "badger" days when it was basically a lightly modified GNOME 2 desktop. Once Ubuntu's default desktop switched to Unity and <a href="https://www.gnome.org/gnome-3/">GNOME 3</a> was released at about the same time, I found immediately that neither of them would work for me. So I bounced around a bit between various desktops like <a href="https://github.com/linuxmint/Cinnamon">Cinnamon</a> and <a href="https://mate-desktop.org/">MATE</a> before finally settling on <a href="https://xfce.org/">XFCE</a> via <a href="https://xubuntu.org/">Xubuntu</a> over the past few years. It's not <em>quite</em> where GNOME 2 was before it was abandoned but it's pretty darn close. Today, we're going to see if <a href="https://www.ubuntu.com/download/desktop">Ubuntu 18.04</a> ("Bionic Beaver" or as I like to call it, "The Beav") and its GNOME 3 desktop is usable for someone as finicky as me.</p>
<h1>Criteria and Trial by Fire</h1>
<p>I am going to be judging Ubuntu 18.04 like an elderly lady judges cantaloupe at the produce market. Carefully, with an experienced eye, and bearing no tolerance for ugly bits, soft spots, or flim-flam. These criteria may appear arbitrary to you, but these are things that have caused me to give up on distros and desktops in the past. They are generally in the "<em>just make it freaking work already</em>" category that should not, really, be impossible to get right here in 2018. (a.k.a. THE FUTURE.)</p>
<p>What I'm <em>not</em> going to do is prattle on about what's new and exciting in this release compared to all the previous ones because frankly dear, I don't give a damn about all that.</p>
<p>Finally, a disclaimer: The criteria by which I judge this new Ubuntu are my own. That means that I weigh certain things that I care about quite heavily while things that most people care the most about might not get any mention at all. If you're looking for a balanced critique of overall usability for the populace at large, this ain't it.</p>
<h2>First Impressions</h2>
<p>I fired up Bionic in a virtual machine (KVM) to get a sense for whether or not I was going to hate it right out of the gate. My understanding is that 18.04 is basically GNOME 3 with some amount of tweaking--and GNOME 3 is something I also have not used up until today.</p>
<figure>
<img src="images/bionic-review/first_boot_640.png">
</figure>
<p>Okay, not too bad. It was a little surprising that the icons in the upper-right area of the screen were basically all one big button instead of independent widgets but I think I can live with that. At this point, I don't know quite what "Activities" is, although there is a search bar in there which is apparently for looking up installed applications.</p>
<p>The bar on the left is obviously a vertical imitation of the Mac OS X dock. In the lower-left corner, there's an icon with a grid of dots that takes you to some kind of overlay thing that shows all the applications. You can search here too. I think I would have preferred an ordinary hierarchical menu here. The icons are gigantic and spaced far apart and any text longer than 10 characters gets cut off. I presume this is supposed to be another imitation of Apple design, the iPhone home screen.</p>
<p>I opened up a few terminals, web browser, etc just to see how the window management was going to work. The application that has focus shows up in the bar at the top but this is definitely not anything like the classic windows task bar. When you click on the application's name, you get a pull-down menu of some kind of random selection of actions. I'm not entirely sure what purpose this is supposed to serve since you could just as easily access these things from the application's window itself.</p>
<p>I notice that when you drag a window near the panel or the top of the screen, the panel and top bar go from translucent to almost solid gray. If there's a reason for why that happens, I can't tell what it is.</p>
<h2>Suspend</h2>
<p>The act of powering up a computer, waiting for it to boot, doing some work, and then waiting for it to shut down gracefully is a barbaric ritual from ancient times. In 2018, we're all modern and hip and just want to open up the laptop lid and get to work. Unfortunately this is easier said than done and as such it really only works reliably with the right combination of supported hardware. And even then, bugs in various layers of the OS can cause it to suddenly stop working consistently after an OS update. Ask me how I know.</p>
<ul>
<li>Suspend and resume must work, without crashing the X session, video, or anything else.<ul>
<li><span class="text-success">Pass.</span> The only issue I had is that the pause button in the upper-right menu got replaced with a power button and I only discovered by accident that holding down "Alt" will change it to a suspend button. Bonus points for an option in the settings to allow the power button to suspend the machine as well.</li>
</ul>
</li>
<li>OS must suspend when laptop lid is closed, every time.<ul>
<li><span class="text-success">Pass.</span> This only works when the laptop is not connected to another display, which makes sense I guess. </li>
</ul>
</li>
</ul>
<h2>Multiple Display Support</h2>
<figure>
<img src="images/bionic-review/display_settings_640.png">
</figure>
<p>Both my work and personal machines are laptops. But I also have external monitors in both locations. There's zero point in even having a laptop on your desk if you can't pick it up at a moment's notice, take it to a meeting, and then come back without having to manually tell it about its new reality vis-a-vis the number of displays and manually rearranging windows on the screen each time something changes. GNOME 2 and MATE always did this perfectly every time. Not much else that I tried since has. XFCE has gotten close but only recently. Here's what I hope Bionic can accomplish without too much drama:</p>
<ul>
<li>When a second external display is connected, the OS should only ask me <em>once</em> how I want the desktop to be displayed across the screens. (Mirrored, extend the desktop, etc.) Every time thereafter, it should remember what I told it earlier (for that particular monitor) and just do the right thing.<ul>
<li><span class="text-success">Pass.</span> Ubuntu didn't actually ask but it did the right thing which was to put the second monitor off to the right of the laptop's display. And configuring the displays in the settings is extremely straightforward. High marks to whoever designed that bit of heaven.</li>
</ul>
</li>
<li>When the external display is disconnected, it should automatically shrink the size of the desktop and move all windows back to the laptop's display.<ul>
<li><span class="text-success">Pass.</span> Worked exactly as described.</li>
</ul>
</li>
<li>When the external display is connected again, the windows that were on it before should be automatically moved back to where they were if, and only if, they were not moved around by the user while the display was disconnected.<ul>
<li><span class="text-success">Pass.</span></li>
</ul>
</li>
<li>All of the above should work as described even if the displays come and go while the laptop is asleep. <a href="https://bugs.launchpad.net/ubuntu/+source/xorg-server/+bug/1557346">XFCE usually just crashes if you do this.</a><ul>
<li><span class="text-success">Pass</span> although it's possible that this is an intermittent problem that will show up in vanilla Ubuntu eventually too.</li>
</ul>
</li>
</ul>
<h2>Multiple Audio Device Support</h2>
<figure>
<img src="images/bionic-review/sound_settings_640.png">
</figure>
<p>My laptops have internal speakers and microphones along with ports on the devices themselves as well as the docks they plug into. I generally use all of them for different purposes at different times.</p>
<ul>
<li>I should hear sound out of the main speakers when I play a cat video on YouTube.<ul>
<li><span class="text-success">Pass.</span> Cat noises observed.</li>
</ul>
</li>
<li>When the laptop is docked, it should route sound through the audio ports on the dock.<ul>
<li><span class="text-success">Pass.</span></li>
</ul>
</li>
<li>When I plug in headphones, it should mute the speakers and route sound to the headphones and do the reverse when I unplug them.<ul>
<li><span class="text-success">Pass.</span></li>
</ul>
</li>
<li>The volume control should adjust the volume of the device that has the audio routed to it.<ul>
<li><span class="text-success">Pass.</span></li>
</ul>
</li>
</ul>
<p>All in all, I'm pleased that all of the general-purpose audio stuff in Ubuntu works just fine right out of the box. I remember having to monkey with pulseaudio settings to get this to work right in Xubuntu.</p>
<h2>Touchpad, Keyboard, Mouse Customization</h2>
<figure>
<img src="images/bionic-review/keyboard_settings_640.png">
<figcaption>There you are, you wily bugger.</figcaption>
</figure>
<p>The only reason this section is here is because in the latest release, XFCE (or Xubuntu) did something crazy with their input preferences handling and now it's all broken as hell. Even when I <em>can</em> set the input preferences to what I want them to be, they frequently revert to the defaults whenever the system is docked or undocked, suspended or resumed, a USB device is plugged in or unplugged, etc. I've had to write scripts to work around all of this.</p>
<ul>
<li>Let me specify the keyboard repeat rate in terms of hard numbers (e.g. 200 ms delay, 50 repeat rate) instead of just unlabeled sliders.<ul>
<li><span class="text-danger">Fail.</span> For one, these settings are hidden away under "Universal Access" (I'm not sure why "Accessibility" needed to be replaced with another new euphemism, but okay). This is a very odd place to put it. If you're going to hide a setting this basic and critical under here, then you might as well move all the mouse and touchpad preferences too. Secondly, these are just bare unlabeled sliders so I have no idea what the actual delay and speed are set to.</li>
</ul>
</li>
<li>Allow me to completely disable mouse acceleration and adjust the speed of the mouse accurately.<ul>
<li><span class="text-danger">Fail.</span> There is one generic "mouse speed" slider that seems to adjust both the speed and the acceleration at once. Although I can fiddle with it to get it somewhere in the area of what I like, there are no labels anywhere on the slider to tell me where it's at should I want to use the same setting on a different machine. </li>
</ul>
</li>
<li>Allow me to enable/disable common touchpad features (edge scrolling, multi-touch gestures, etc) as well offer reasonable palm detection.<ul>
<li><span class="text-warning">Semi-pass.</span> I can enable and disable features but palm detection doesn't appear to work.</li>
</ul>
</li>
<li>Remember all of the changes I make to the defaults above across reboots, suspend/resume, docking/undocking, etc.<ul>
<li><span class="text-success">Pass.</span> All the preferences appear to stick.</li>
</ul>
</li>
</ul>
<h2>Window and Desktop Management</h2>
<figure>
<img src="images/bionic-review/activities_640.png">
<figcaption>I don't know what's real anymore.</figcaption>
</figure>
<p>I won't deny that in certain respects I am set in my ways. You didn't ask for them, but here are some of my ways.</p>
<ul>
<li>I should be able to enable focus-follows-mouse in the window manager preferences. (And this feature should work largely as expected, e.g. no focusing of the empty desktop itself or icons and so forth.)<ul>
<li><span class="text-danger">Fail.</span> There is no option to enable this in the standard settings window. It can be enabled with some additional software but unfortunately I found that it doesn't work smoothly enough to be useful.</li>
</ul>
</li>
<li>Easily resizable windows. XFCE defaults to one-pixel-width window borders. <em>ONE PIXEL.</em> Can you believe that?<ul>
<li><span class="text-success">Pass.</span> All four window borders are invisible but they're there.</li>
</ul>
</li>
<li>Multiple workspace support. Unix desktops have had this incredibly useful feature for ages.<ul>
<li><span class="text-danger">Fail.</span> There's this "Activities" thing that <em>looks</em> like it allows you to drag windows around to different desktops but from what I can tell, that only works on the primary display. Whatever you put on the secondary monitor stays on that monitor no matter which Activity is selected. Implementing this must have taken twice the amount of time as simple separate desktops and yet only offers half the usefulness. Further, to switch between activities takes a minimum of three mouse clicks. In XFCE/KDE, one can simply click the desired desktop in the workspace widget or flick the scroll wheel on the desktop. </li>
</ul>
</li>
<li>Windows that snap together. When I want two windows near each other, I almost always want them <em>right</em> next to each other with no space in between.<ul>
<li><span class="text-success">Pass.</span> But just barely. The only "snappiness" you get is when you join two windows together, and even then there's no real magnetism; it's more like a slight stickiness.</li>
</ul>
</li>
</ul>
<h2>SSH Agent</h2>
<p>As a system administrator, a large part of my job is using SSH to log into random hosts to check on or troubleshoot things. SSH keys are how any non-insane organization handles authentication. As such, SSH private keys <em>really, really</em> should be password encrypted but when unlocked should be stored in an agent so that you don't have to type the password every time you log into a host.</p>
<ul>
<li>When I successfully log into the desktop, the OS should use my password to try to unlock the SSH keys in my <code>~/.ssh/</code> directory and add them to a persistent SSH agent.<ul>
<li><span class="text-success">Pass.</span> When you first try to log into a host, it brings up a dialog that asks you to unlock the private key which also contains a check box to unlock the key when you log in. Very nice.</li>
</ul>
</li>
<li>Bonus points if the agent can handle key types besides DSA and RSA (last time I looked, <code>gnome-keyring-daemon</code> did not.)<ul>
<li><span class="text-success">Pass.</span> It handled my ed25519 key just as well as my RSA key.</li>
</ul>
</li>
</ul>
<h2>Remote Filesystems</h2>
<figure>
<img src="images/bionic-review/remote_filesystems_640.png">
</figure>
<p>At home, I have a few NAS filesystems that I use regularly. They can be accessed as CIFS shares or via SSHFS. I'd like at least one of these to work.</p>
<ul>
<li>I want to be able to go into the OS file manager and tell it to mount a particular CIFS share or SSHFS filesystem over the network with minimal fuss.<ul>
<li><span class="text-success">Pass.</span> You click "Other Locations" in the file manager and then enter a URI for the remote filesystem. All popular remote file systems look to be supported out of the box.</li>
</ul>
</li>
<li>I should be able to save the settings for the remote filesystem such that after a reboot, I only need to click on one thing in the file manager to open the remote filesystem again. (And not be prompted for a password, etc.)<ul>
<li><span class="text-success">Pass.</span> You have to add a bookmark to the filesystem and when entering your credentials for the first time, it makes the offer to save them just for the session or all eternity.</li>
</ul>
</li>
<li>The file manager should not get unduly confused and ornery whenever the remote filesystem goes away because the network disappears, the laptop has gone to sleep, etc.<ul>
<li><span class="text-success">Pass.</span> I did not test this extensively but I suspended the machine and the share was perfectly browseable without complaint some time later.</li>
</ul>
</li>
</ul>
<h2>External Media</h2>
<p>I mean, I know how to mount things on the command line but it's nice if the file manager can do all the boring bits for simple cases like SD cards and USB drives.</p>
<ul>
<li>When I insert an SD card or USB drive, the OS should automatically mount the thing and open a file manager to it.<ul>
<li><span class="text-success">Pass.</span> I plugged in an external USB disk with an XFS filesystem and it mounted it up no-questions-asked. No window popped up, but the icon appeared on the desktop and it shows up as a removable drive in the file manager.</li>
</ul>
</li>
<li>When I'm done with some external media, I should be able to click on an eject button somewhere in the file manager to umount it.<ul>
<li><span class="text-success">Pass.</span> No visits from the drama llama this time.</li>
</ul>
</li>
</ul>
<h1>Things That Irked Me</h1>
<p>All of the scroll bars are about 5 pixels wide and often disappear entirely even when there's multiple pages of stuff. Call me old-fashioned but I like to be able to see where I am in a body of vertical content even when it's standing still.</p>
<p>There's a weird slow-scrolling effect when you grab a scroll bar and try to drag it. Often, instead of scrolling down at the rate the mouse is moved, it starts scrolling much more slowly than I'm expecting and the mouse ends up hitting the bottom of the screen before the content has scrolled to the bottom. When it decides to do this is very unpredictable. I'm sure someone thought it was a good idea but this is a major usability fail just because of how unpredictable it is. I wouldn't mind some other way to scroll more slowly, but it must be more obvious and predictable.</p>
<p>The blatant Amazon advertising. Removing the giant "A" logo from the panel is a simple matter of right-clicking but it's not at all obvious how to get rid of it from your applications menu. (And also, Amazon is a web site not an application so it arguably shouldn't be there anyway.)</p>
<p>When the display goes to sleep due to lack of input or whatever, you have to drag upwards with your mouse to unlock the screen. Like some common dirty frickin' smart phone. And unfortunately, this is not obvious. After a few minutes of my second-favorite hobby, Keyboard Mashing Time, it turns out that the Esc key (and no other) will clear it as well.</p>
<figure>
<img src="images/bionic-review/lock_screen_640.png">
<figcaption>I'm not a smart phone, but I play one on TV.</figcaption>
</figure>
<p>Both the mouse and touchpad default to "Natural Scrolling," an abomination invented by Apple, who thought that every input device that humans can lay their greasy little hands on should behave like an iPhone. I spend every day in a state of permanent quixotic hope that eventually humanity will come to its senses and realize that computers and mobile devices are different kinds of technology with different purposes and different usage patterns.</p>
<p>Dragging a window to the top of the screen maximizes the window. FOR GOD'S SAKE WHY. There's a perfectly good Maximize button on every damn window for this purpose. Now in order to put a window at the top of the screen (which I do often), I have to drag it up, watch it maximize, and then click the Unmaximize button. Other desktops do this too for some reason, but I can usually disable it in them. But not Ubuntu. Is this a Windows/Mac thing? I can't figure it out.</p>
<h1>Things That Puzzled Me</h1>
<figure>
<img src="images/bionic-review/update_lies_640.png">
<figcaption>Ubuntu Software, a.k.a. Pack of Lies</figcaption>
</figure>
<p>Even after clicking the refresh button, the Ubuntu Software center always tells me, "Software is up to date." But this is evidently not true because if I run "apt update", I am generally told there are updates. My assumption for now is that the Ubuntu Software application only shows updates for packages that were installed through it. Further, the list of installed applications that it shows is very clearly a subset of those that are actually installed on the system. Color me befuddled.</p>
<p>The official Ubuntu repositories are quite often dog-slow for reasons unknown. Busy arthropods like myself don't have time to waste on this kind of nonsense. In order to get any kind of reasonable speed, I had to switch to a third-party mirror. (Thank you, <code>http://mirror.math.princeton.edu/pub/ubuntu</code>.)</p>
<p>If you have multiple monitors, you can only set one background for both; you can't set them independently. I guess the developers thought that having different backgrounds on different displays would have been too confusing for users and cause their brains to explode.</p>
<p>And while we're on the topic of aesthetic issues, I still can't work out who at Ubuntu thinks that bright orange and purple is a reasonable color combination for a user interface. Not only is there no way to change this, there doesn't seem to be any way to adjust <em>anything</em> at all relating to the look-and-feel of Ubuntu. Not colors, not widgets, not even fonts. If orange everywhere makes your eyes bleed, you'd better stock up on the tissues now.</p>
<h1>Pleasant Surprises</h1>
<figure>
<img src="images/bionic-review/nextcloud_account_640.png">
</figure>
<p>Ubuntu offers integration with a number of services, among them Nextcloud. If you haven't heard of it, <a href="https://nextcloud.com/">Nextcloud</a> is a file and productivity server. It has a number of different "Apps" but I use it mainly as a calendar server. When I put in my Nextcloud credentials and launched the Calendar program, a perfectly serviceable calendar appeared with all of my events on it. That's pretty cool. The other Nextcloud services worked fine as well.</p>
<h1>Conclusion</h1>
<p>It's obvious that a lot of work and polish went into this release. Although no Linux-based desktop OS has yet been able to wrest much market share from Windows and Mac OS, I'd say within the last ten years it has become at least moderately popular among software developers and other technology-centric folk. I applaud Canonical for being part of the reason this is true. They also get a lot of credit for supporting tons of ancillary open source projects along the way, including actively encouraging spin-offs of their OS.</p>
<p>The Bionic Beaver release of Ubuntu is actually pretty solid, truth be told. Although it turns out that the basic design of the window and desktop management completely prevents me from switching away from Xubuntu, I think it's a fine choice for a lot of users. To get all cliche about it: sorry Ubuntu, it's not you, it's me.</p>Linux Neckbeard Shocked, A Newfangled Code Editor that Doesn't Suck2017-10-29T00:00:00-04:002023-12-10T00:49:37-05:00Charlestag:None,2017-10-29:articles/2017/October/linux-neckbeard-shocked-a-newfangled-code-editor-that-doesnt-suck.html<figure>
<a href="images/vscode/vscode-01.png">
<img src="images/vscode/vscode-01-640.png">
</a>
</figure>
<p>It was a dark and stormy night. My eyes were glued to the screen as I watched
the <a href="https://www.ansible.com/">Ansible</a> playbook make its way through the
myriad configuration changes across dozens of production hosts. Of course all
of this had been tested in staging but if you work in technology, you have
more than a passing acquaintance with Old Man Murphy and his insipid law. This
is why my knuckles had turned white, poised over the <code>Ctrl</code> and <code>C</code> keys
waiting to abort the play in a hurry if I really had to... and if there was
enough time.</p>
<p>And then, inevitably, the moment I feared most had arrived. There was red.
Lots and lots of red splattered across the display, illuminated by the
display's backlight like some grotesque horror show. The scrolling came to a
dramatic halt while my breathing did the same. I scanned the errors, realized
what a fool I had been all this time (for at least the 12th time that day),
corrected a typo, and ran the play again. Successfully, no thanks to that
Murphy bastard.</p>
<p>The problem was caused by the fact that most changes to our Ansible playbooks
tend to span multiple files, because we follow the recommended
<a href="http://docs.ansible.com/ansible/latest/playbooks_reuse_roles.html">role-based directory structure</a>.
That means having multiple files open simultaneously and switching between
them as changes are made. This, as demonstrated by the not entirely fictional
account above, makes the whole process prone to error. It's growing
increasingly rare these days that I work on a project that <em>doesn't</em> involve
multiple files, and clearly my workflow is failing me.</p>
<p>I wouldn't call myself a <a href="https://vim.sourceforge.io/">vim</a> fanboy. It's just
that vim is what I use. It's on virtually every host I'm likely to touch and
it speeds up the process of editing code and config files so tremendously
that I would be lost without it. The main problem with vim is that everything
beyond basic text editing is pretty clumsy. Everything else that's handy to
have in a <del>text</del> code editor has a learning curve, is awkward or annoying
to use, or both. That includes many of the more full-featured extensions and
the graphical variants of vim.</p>
<p>For projects involving multiple files, lately I've been muddling through with
some combination of terminal tabs, <a href="https://en.wikipedia.org/wiki/GNU_Screen">GNU
Screen</a>, and vim's "tabs." This
works a little better than one would expect but it's still easy to get
temporarily lost when my brain is busy chugging away at a more important
problem.</p>
<p>And then it occurred to me that I'd been trying to cobble together my own IDE.
Prior to this revelation, I had never used--or thought I needed--an IDE. I
know that there are oodles of IDEs on Linux but my brief exposure to them in
the past informed me that they generally take a long time just to load, take
even longer to set up properly, and just come with way, <em>way</em> more stuff than
I'll ever use. Oh, and I can't stand clutter. All I really want out of an IDE
is this:</p>
<ul>
<li>Integrated vim (because I can't be productive in anything else),</li>
<li>the ability to have multiple tabs open,</li>
<li>and a bare-bones file manager for finding and selecting files to edit.</li>
</ul>
<p>Everything else lies somewhere on the continuum between fluff and
brain-damaged anti-features. Gvim doesn't quite fit the bill: it lacks a
simple file manager that stays put, and its tabs aren't really tabs.
Plus if you have to exit vim (or shut down your computer), the whole
arrangement is lost. Most other IDEs and editors get shot down either by not
supporting vim-style editing or not being open source.</p>
<p>Now, I wouldn't call myself a luddite but a lot of newfangled stuff doesn't
impress me. In many cases, a surprising amount of stuff that hits the front
pages of Hacker News and subreddits is just some old idea repackaged with
gushing praise for itself and support for emoji. So when
<a href="https://atom.io/">Atom</a> made a big splash my first thought was, "great,
they're reinventing the text editor but this time in Javascript." I remember
running it once just to see what all the hype was about but recall it being
slow and unimpressive.</p>
<p>Well fast-forward a few years and Javascript is practically a first-class
citizen on the desktop today. It has a mature community, fast interpreters,
and lots of libraries. Some of my co-workers are using these
newfangled Javascript-based editors so I thought I would give them a whirl.
Atom started the movement but saw such success that it was quickly followed
by several work-alikes, also written in Javascript. Probably the most
well-known contenders would be <a href="http://brackets.io/">Brackets</a> and <a href="https://code.visualstudio.com/">Visual
Studio Code</a>.</p>
<p>It started out purely as idle curiosity, I swear. For one, I was amazed that
Microsoft had an open-source code editor, hosted on GitHub no less, rather
than just trying to bundle it with their other dev tools. When I saw that
they offered Linux packages prominently on their Download screen, I was
positively intrigued. I just <em>had</em> to install it into a VM and give it a spin,
if for no other reason than to grin at the ensuing train wreck and shake my
head while making clucking sounds.</p>
<p>This is (approximately) what greeted me:</p>
<figure>
<a href="images/vscode/vscode-02.png">
<img src="images/vscode/vscode-02-640.png">
</a>
</figure>
<p>The first thing that I wondered was whether this thing can do vim well enough
to hold at bay my chair-throwing tendencies when a tool doesn't work the way I
want. I installed the
<a href="https://marketplace.visualstudio.com/items?itemName=vscodevim.vim">Vim</a>
extension and gave it a test run. To my surprise, everything that I used on a
routine basis worked fine. It looked like this was off to a promising start.</p>
<p>When I first started using VSC I didn't really know what I was doing so I
opened my home directory as a folder. This turned out to be a mistake. For one,
this causes VSC to spawn a process that crawls the whole folder in order to
index the text. My home directory is relatively huge due to the nature of my
work and this positively hammered the disk as well as ate up a whole CPU core.
This actually caused me to write off VSC as an ill-performing hunk of garbage
for longer than I care to admit. Ever since I figured out that opening a
folder dedicated to a single project is the right way to do it, things have
been much smoother.</p>
<p>It turns out that VSC is actually
<a href="https://code.visualstudio.com/docs">rather well documented</a>, so there's
no point in me rehashing all of its features and whatnot here. I'll just
mention a few of the things that tick my boxes:</p>
<p><strong>Integrated vim</strong>. Basically, all of the important things I routinely do in
vim work in VSC.</p>
<p><strong>Integrated file manager</strong>. It works pretty much exactly as I would expect
with no surprises. To spruce it up just a bit, I also installed the
<a href="https://marketplace.visualstudio.com/items?itemName=robertohuertasm.vscode-icons">vscode-icons</a>
extension.</p>
<p><strong>Tabs</strong>. Tabs work as you'd expect. When you have the vim extension installed,
you can even switch tabs with <code>gt</code>, although that stops working when you land
on a non-vim tab.</p>
<p><strong>Performance and stability</strong>. It opens instantly and is always quick to
respond when typing. I haven't had it crash on me yet, that I can recall.
Good enough for me.</p>
<p><strong>Integrated terminal</strong>. I don't have a problem using an external terminal
but the built-in one is nice to have.</p>
<p><strong>Ubuntu/Linux friendly</strong>. The VSC package can be installed on Ubuntu where it
will be automatically updated alongside the usual <code>apt update && apt
dist-upgrade</code> routine.</p>
<p><strong>A vibrant extension community</strong>. It's surprising how many extensions there
are. I personally only use a handful of them.</p>
<p><strong>Saves your work</strong>. If you close VSC and come back to it, it pops right back
up where you were, with all your windows, tabs, and changes intact. Cool.</p>
<p>I've been using VSC for a while now and it has really grown on me.
I really like how it gets out of the way and lets me get my work done. However,
there are just a few things that worry me or that I think could be improved:</p>
<p><strong>Minimap enabled by default</strong>. The minimap is this icon-like view of the
whole file in one vertical bar. It's too small to read any of the text but it
shows a sort of visual outline of the file. This is pretty worthless to me
since it takes up quite a lot of real estate relative to the value it
provides. Also, I believe that if you're working on a file that's becoming
too big to easily navigate, that's a good sign that the file needs to be
broken up anyway.</p>
<p><strong>Non-unix newline handling</strong>. I filed
<a href="https://github.com/Microsoft/vscode/issues/35181">a bug</a> about this but so
far, the devs don't seem to think it's much of a problem.
Basically, this is a holdover from the editor's Windows roots. On Unix, all
lines in a file end in a newline, including the last line. Vim and all other
Unixy text editors automatically put a newline on the end of a file but don't
show it to you. There is a setting to work around this (see the bug), however
it still shows an empty line at the bottom of the editor and that annoys me.</p>
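<p>The convention itself is easy to see from a shell. A quick illustration (the file names are made up):</p>

```shell
# Two files that differ only in whether the final line is newline-terminated.
printf 'one\ntwo'   > no-trailing-newline.txt
printf 'one\ntwo\n' > trailing-newline.txt

# POSIX defines a "line" as a sequence of characters *terminated* by a
# newline, so wc -l undercounts the file that lacks the final one:
wc -l no-trailing-newline.txt   # reports 1 line
wc -l trailing-newline.txt      # reports 2 lines
```

<p>This is why Unixy editors quietly append that final newline: tools that count or iterate over lines otherwise ignore the last one.</p>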
<p><strong>Integrated terminal as a separate pane</strong>. The integrated terminal can only
live at the bottom of the window. You can't open one as a tab next to your
other editor tabs, which I would prefer. And while you can have multiple
terminals going at the same time, you have to switch between them via a
drop-down. Not tabs, unfortunately.</p>
<p><strong>Feature creep</strong>. VSC already does an amazing amount of things, but (and this
is crucial) they mostly stay out of your way when you don't need or want
them. This is in stark contrast to most IDEs that try to shove all the
features into your face to show you how awesome they are. I worry that if the
VSC devs keep piling on features without drawing a line in the sand
somewhere, the code base will become bloated, slow, and hard to maintain.</p>
<p><strong>Nagging</strong>. VSC nags you when you're not using the latest version. This drives
me a little crazy. Not so crazy that I've looked up the setting to disable it,
but still.</p>
<figure>
<a href="images/vscode/vscode-03.png">
<img src="images/vscode/vscode-03-640.png">
</a>
</figure>
<p>I have been using Linux both personally and professionally for basically all
of my adult life and have actively avoided Windows and other Microsoft
products because they represented everything that I saw as wrong with
proprietary, commercial software. But I guess we're seeing a kinder, gentler
Microsoft or something these days. VSCode is excellent and the fact that I'm
voluntarily running an open-source Microsoft product on my Linux machine for
day-to-day work is still pretty weird whenever I think about it.</p>Raspberry Pi Serial Console in Linux2012-06-23T18:05:00-04:002023-12-10T00:49:37-05:00Charlestag:None,2012-06-23:articles/2012/June/raspberry-pi-serial-console-in-linux.html<figure>
<img src="images/raspberry-pi-serial-console-in-linux/rpi_serial_main.JPG">
</figure>
<p>My <a href="http://www.raspberrypi.org/">Raspberry Pi</a> arrived in the mail earlier this week. However, work, family, and other commitments meant that tinkering with it had to wait until the weekend. Until today, all that I managed to accomplish was to download a Debian-based OS image, flash it to an SD card, hook the board up to my TV, and squeal with joy as it booted.</p>
<p>Now I'm ready to dig into this diminutive computer and see what's going on inside, but I have a slight problem. The Pi has only HDMI and composite for video output and it turns out that I have nothing in my office which can display video from either of these. (You can supposedly buy an HDMI to DVI adapter, but I don't yet have a DVI-capable monitor either.) This basically leaves me with two choices: Log into the Pi via SSH over ethernet or connect a serial console.</p>
<p>SSH would work just fine in theory, but there are two major drawbacks:</p>
<ol>
<li>You don't get any feedback on the boot process. If the operating system doesn't come up far enough to configure the ethernet port, give it an IP address, and then run the SSH daemon, I'll have to go and hook it up to something else to debug it. Which is clearly obnoxious.</li>
<li>The Debian image that I installed doesn't run an SSH daemon by default.</li>
</ol>
<p>So that leaves us with the serial console. The Pi has two rows of headers for general-purpose IO (GPIO). Three of these pins double as a serial port: ground (GND), receive (RX), and transmit (TX). The protocol is the same one spoken by standard 9-pin RS-232 ports on PCs, but <strong>you cannot connect them directly to a PC's serial port</strong> because the voltage levels are different. You'll basically fry your shiny new uber-cheap Linux board. And it wouldn't even make a very serviceable doorstop.</p>
<figure>
<img src="images/raspberry-pi-serial-console-in-linux/rpi_serial_cable.JPG">
</figure>
<p>You need a special cable with a bit of circuitry in it to do the level conversion. I happen to have one that I bought off eBay for a couple bucks to do wifi router hacking. It's just a cell phone data cable which I spliced a CD-ROM audio cable connector onto. Finding the right cable can be tricky since these aren't as common anymore. Just make sure you get one that converts to 3.3V, not 5V. The <a href="http://wiki.openwrt.org/doc/hardware/port.serial">OpenWRT wiki</a> has some suggestions. If you don't want to hunt around, just buy <a href="https://www.adafruit.com/products/70">this one from Adafruit</a>. Either way, you'll probably have to do some wire-splicing since there is no standard connector or pin arrangement for this.</p>
<p><a href="http://elinux.org/File:GPIOs.png">This image</a> shows the pinout for the Pi's GPIO header. The serial cable is connected to pins 6 (GND), 8 (TX), and 10 (RX). Remember that you have to connect the cable's TX wire to the Pi's RX pin, and the cable's RX wire to the Pi's TX pin.</p>
<p>Once the hardware is sorted, the rest is easy. Just plug the cable into your computer. Run the dmesg command to see how your system recognized the level converter. In my case, these were the relevant messages:</p>
<div class="highlight"><pre><span></span><code>[125827.544373] usb 1-6.1: new full-speed USB device number 9 using ehci_hcd
[125827.663087] usbcore: registered new interface driver usbserial
[125827.663120] USB Serial support registered for generic
[125827.663202] usbcore: registered new interface driver usbserial_generic
[125827.663208] usbserial: USB Serial Driver core
[125827.665650] USB Serial support registered for pl2303
[125827.665717] pl2303 1-6.1:1.0: pl2303 converter detected
[125827.667643] usb 1-6.1: pl2303 converter now attached to ttyUSB0
[125827.667684] usbcore: registered new interface driver pl2303
[125827.667689] pl2303: Prolific PL2303 USB to serial adaptor driver
</code></pre></div>
<p>This is showing that the kernel recognized the device, set up the pl2303 driver, and then attached it to the character device /dev/ttyUSB0. In most cases, this will be the device you'll see too.</p>
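<p>If you don't feel like eyeballing the whole log, you can also grep the device name out of it. A rough sketch, run here against the saved excerpt above rather than live <code>dmesg</code> output (the grep pattern is my own guess at what's stable across drivers):</p>

```shell
# Work from a saved copy of the kernel messages (in real life: dmesg | tail).
cat > dmesg-excerpt.log <<'EOF'
[125827.667643] usb 1-6.1: pl2303 converter now attached to ttyUSB0
EOF

# Pull out the tty name and turn it into a device path.
grep -Eo 'attached to tty[A-Za-z0-9]+' dmesg-excerpt.log \
    | awk '{ print "/dev/" $3 }'    # prints /dev/ttyUSB0
```
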
<p>I previously used minicom to talk to serial ports, but recently found out that GNU screen works just as well. Just run this command as root, power up the Raspberry Pi, and away you go:</p>
<div class="highlight"><pre><span></span><code>screen /dev/ttyUSB0 115200
</code></pre></div>
<p>(Depending on how your Linux distribution and account are set up, you may be able to use sudo or add your account to the dialout group.)</p>
<p>If all goes well, you'll see a flurry of kernel messages scroll by, followed by a login prompt. Success!</p>
<figure>
<img src="images/raspberry-pi-serial-console-in-linux/rpi_serial_console.png">
</figure>
<h2>F.A.Q.</h2>
<p><strong>Q</strong>: Holy Moses, this doesn't work at all!</p>
<p><strong>A</strong>: Probably the trickiest part in all of this is connecting the serial cable to the right pins on the Raspberry Pi. The good news is, you aren't likely to blow anything up by connecting them the wrong way. With only three pins, there aren't that many different combinations, so just try them all. The ground wire is probably going to be easiest to find, so try to connect that one first and then you have only two combinations to try.</p>
<p>Another strategy (one I have not tried) might be to plug the Raspberry Pi's power cable into your PC's USB port or a powered hub. (The docs warn against this but if you don't have any peripherals connected to the Pi and aren't running any CPU-intensive programs, you should be fine.) This connects the Pi's signal/power ground to the computer's signal ground so you then only have to worry about the RX and TX pins. Once you've figured them out, go back to powering your Pi with a cell phone or USB charger.</p>OpenSSH: The Poor Man's SOCKS Proxy2009-01-21T01:17:00-05:002023-12-10T00:49:37-05:00Charlestag:None,2009-01-21:articles/2009/January/openssh-the-poor-mans-socks-proxy.html<p>Just when I think I know everything I need to know about <a href="http://openssh.org/">OpenSSH</a>, I end up learning something new and tremendously useful. Today, that would be the -D argument.</p>
<p>Many times I have been stuck on an "untrusted" Internet connection and need to log in (insecurely) to a certain site. My university, for example, uses a system that has no way of logging in via HTTPS, nor does it secure the traffic to and from the browser. I have moderate faith that the folks at my ISP aren't snooping my traffic (since I know the company pretty well and used to work with them), so I don't have a huge problem logging into their site at home. I also have a colocated server at the web hosting company I work for, so I know the layout of their network even better and trust them not to snoop or interfere with my traffic. But when I'm on the road connected to some dodgy insecure hotel wifi, I acquire no small amount of anxiety over the fact that anyone with a packet sniffer can get access to all of my personal and academic details.</p>
<p>For the past few years, I've had this plan to get <a href="http://openvpn.net/">OpenVPN</a> set up for my network and laptop so that I can always have a secure connection to my home and colocated server. And for the past few years, I've kept putting it off. While OpenVPN is easier to use than many other VPN solutions I could name, it's still at least a good hour of my time getting all the settings right and testing it out.</p>
<p>I was already aware of OpenSSH's -L option which simply forwards a local port through an SSH tunnel to a port on the remote machine. Very handy when you want to connect securely to a site hosted on that server and happen to have a shell account on it. But doing much more than that ranges from complex to impossible. This is where -D comes in.</p>
<p>The -D arg tells OpenSSH to be a <a href="http://en.wikipedia.org/wiki/SOCKS">SOCKS</a> proxy. So you simply log in to the endpoint via SSH with the -D arg like:</p>
<div class="highlight"><pre><span></span><code>ssh -D 1234 user@host.example.com
</code></pre></div>
<p>And then tell your web browser to use a SOCKS v5 proxy on localhost at the specified port and bingo, you have a secure connection to your endpoint. In fact, any application with SOCKS support can have its traffic routed through the SSH tunnel. Firefox supports SOCKS just fine; Opera doesn't. Konqueror is supposed to, but judging from the Google responses I got, support might be a little flaky.</p>
<p>The final test was whether I'd be able to use this newfangled (to me) proxy method on my Nokia N800, a device that I browse and email with quite often whilst traveling. Obviously OpenSSH has to be installed as it doesn't come with the firmware. And the N800's web browser, MicroB, uses the Gecko engine. The UI has no widgets for entering a SOCKS proxy, but you can set the preferences manually with about:config:</p>
<div class="highlight"><pre><span></span><code>network.proxy.socks localhost
network.proxy.socks_port 1234
network.proxy.type 1
</code></pre></div>
<p>The result? Portable proxy surfing!</p>Linux Terminal Speed Benchmarks2008-10-27T02:54:00-04:002023-12-10T00:49:37-05:00Charlestag:None,2008-10-27:articles/2008/October/linux-terminal-speed-benchmarks.html<p>In system administration, you spend a lot of time typing into and reading back information from a terminal. Although all terminals pretty much do the same thing, they can differ somewhat in their UI features or which desktop they were designed to be integrated into.</p>
<p>A few years back I was doing a lot of compiling (Gentoo, FreeBSD) and I felt that a good deal of that time was spent just waiting for the terminal to print the enormous amount of compiler cruft to the screen. So I did some quick benchmarks. I don't remember the exact results of those benchmarks, nor whether I actually made a decision based on them, but I clearly remember that the results were interesting.</p>
<p>The topic of terminal speed came up at work today so I set out to replicate the experiment. Creating a benchmark like this is harder than it sounds: every time a single character is printed in a graphical terminal, code runs in the Linux kernel, in numerous places in X, in the video card driver, in the command shell (bash), and in the application running the benchmark itself; even the raw performance of the video card can come into play. To design the perfect graphical terminal benchmark, you'd need deep knowledge of how all of those work, and you'd have to carefully craft the benchmark so as to maximize the "stress" on the graphical terminal code while minimizing "stress" on the other components of the system.</p>
<p>However, I'm far too lazy for all that.</p>
<p>So I just catted a <a href="http://www.kernel.org/pub/linux/kernel/v2.6/ChangeLog-2.6.23">Linux kernel changelog</a> to the screen. Each benchmark was run four times sequentially and the time averaged over the last three trials. (The first is a dry run to ensure that the file is cached in memory.)</p>
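<p>For the record, the procedure amounts to something like the script below. The timing approach (<code>date</code> arithmetic summed with <code>awk</code>) is a stand-in for whatever I actually used back then, and the changelog file name is assumed:</p>

```shell
#!/bin/sh
# Benchmark sketch: one warm-up pass, then three timed passes, averaged.
FILE=${1:-ChangeLog-2.6.23}
[ -f "$FILE" ] || seq 1 200000 > "$FILE"    # fall back to generated filler

cat "$FILE" > /dev/null    # dry run: make sure the file is in the page cache

: > times.txt
for run in 1 2 3; do
    start=$(date +%s.%N)
    cat "$FILE"            # output must actually hit the terminal under test
    end=$(date +%s.%N)
    echo "$start $end" >> times.txt
done

# Average the three wall-clock durations.
awk '{ sum += $2 - $1 } END { printf "average: %.3f seconds\n", sum / NR }' times.txt
```

<p>Run it once inside each terminal under test; lower is better.</p>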
<div class="highlight"><pre><span></span><code>Terminal time cat ChangeLog-2.6.23
-----------------------------------------
xfce4-terminal 11.109
gnome-terminal 11.022
terminator 10.878
xterm 7.320
konsole 3.191
rxvt 2.983
</code></pre></div>
<p>I was rather expecting rxvt to win since it's widely regarded as the minimalist terminal, but Konsole was a surprise. It beats even xterm by a large margin. Like KDE, Konsole is almost certainly written in C++, widely regarded as slower than C, which is what makes these results pretty interesting. It's also noteworthy that xfce4-terminal is right on par with the Gnome terminal when XFCE is supposed to be more lightweight than Gnome. (And probably is, overall.) Based on these figures, one could speculate that terminator, xfce4-terminal, and gnome-terminal are all based on similar code or libraries.</p>
<p>And finally, just in case you skipped the part above where I said how poorly this "benchmark" was really constructed, I want to emphasize it again: This benchmark is completely unscientific. This is how these terminals did on my computer. You may get a different (even perhaps contradictory) set of results if you run them on your computer. Nevertheless, I'm fairly confident that the results here are representative of what most people will see.</p>