<?xml version="1.0" encoding="UTF-8"?> <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"> <channel> <title>Bernardo Donadio</title> <description>Infrastructure specialist, IT automation engineer and 3D printing enthusiast.</description> <link>https://bcdonadio.com/</link> <atom:link href="https://bcdonadio.com/feed.xml" rel="self" type="application/rss+xml"/> <pubDate>Thu, 11 Sep 2025 10:14:03 -0300</pubDate> <lastBuildDate>Thu, 11 Sep 2025 10:14:03 -0300</lastBuildDate> <generator>Jekyll v4.4.1</generator> <item> <title>DevXperience: quando a baleia azul encalha</title> <description>&lt;p&gt;I gave a talk at DevXperience 2017 about Docker’s eternal bugs. Take a look!&lt;!--more--&gt;&lt;/p&gt; &lt;iframe width=&quot;560&quot; height=&quot;315&quot; src=&quot;https://www.youtube-nocookie.com/embed/rmE29KD9-gw&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt; &lt;p&gt;If you prefer, grab the &lt;a href=&quot;/res/devxperience-baleia-azul/quando_a_baleia_azul_encalha.pdf&quot;&gt;talk’s slides in PDF&lt;/a&gt; to follow along.&lt;/p&gt; &lt;p&gt;Keep in mind that although this is a talk about Docker bugs, and is otherwise negative about the tool, I use Docker daily and advocate for its adoption where it already works well. Containerization has already revolutionized computing, but we still have some rough edges to smooth out before we can apply it everywhere.&lt;/p&gt; &lt;p&gt;Feedback is very welcome!&lt;/p&gt; </description> <pubDate>Tue, 05 Sep 2017 03:00:00 -0300</pubDate> <link>https://bcdonadio.com/2017/devxperience-baleia-azul/</link> <guid isPermaLink="true">https://bcdonadio.com/2017/devxperience-baleia-azul/</guid> <category>sysadmin</category> <category>talks</category> </item> <item> <title>Writing a CV for an IT position</title> <description>&lt;p&gt;Here’s a quick list of Dos and Don’ts when writing résumés for an IT position. 
&lt;!--more--&gt;&lt;/p&gt; &lt;p&gt;&lt;img src=&quot;/images/cv/cv.jpg&quot; alt=&quot;CV header image&quot; /&gt;&lt;/p&gt; &lt;p&gt;I’m an engineer at &lt;a href=&quot;https://www.stone.com.br/&quot;&gt;Stone Payments&lt;/a&gt;, a Brazilian credit card acquirer, and a big part of our workforce is composed of IT people. I also joined a team in the company that was growing quickly, and therefore had to interview a lot of prospective new colleagues. While the People Team (that’s what we call HR here) definitely does a great job of helping us assess a candidate’s character and attitude, it cannot help us determine their technical skills. Therefore, we engineers also have to do the interviewing.&lt;/p&gt; &lt;p&gt;The result is that I noticed that a breathtakingly large number of programmers/sysadmins don’t know how to write a proper curriculum for their area, relying instead on generic tips found on job sites, or on nothing at all. So I decided to try to help those people express skills that would otherwise pass under our radar, so that valuable candidates aren’t lost in the process.&lt;/p&gt; &lt;p&gt;These tips will not only help you write a curriculum that is great to read, but will also show you’re capable of prioritizing relevant information, making your CV stand out from the rest. We receive a lot of noise, and it’s not always easy to pick the good applicants out of the pile of CVs we receive daily. Help your prospective employer choose you!&lt;/p&gt; &lt;p&gt;So, here it is:&lt;/p&gt; &lt;h3 id=&quot;do&quot;&gt;DO&lt;/h3&gt; &lt;ul&gt; &lt;li&gt;Organize your jobs in reverse chronological order, most recent first. List a few bullet points on each one about what you did. Write when you joined and when you left. 
Don’t write why: we will ask about that in the interview.&lt;/li&gt; &lt;li&gt;Tailor your CV to show the skills most relevant to each prospective employer.&lt;/li&gt; &lt;li&gt;Thoroughly organize your résumé, and it’s probably better not to use Microsoft Word for this task. If you can do it well with Word, great. I personally cannot, and recommend using a tool that makes a very clear distinction between content hierarchy and formatting. We like and recommend LaTeX. It’s very easy to spot a document made with LaTeX, and it shows you know how to deal with professional documents. It also shows you know how to treat documents as code, which is very important for a devops-cultured company.&lt;/li&gt; &lt;li&gt;Make a list of technologies/languages you’ve already worked with. Even though we don’t hire for knowledge of specific technologies (we prefer to teach), it’s easier to know what your expertise background is. It also helps when scanning through a pile of CVs.&lt;/li&gt; &lt;li&gt;Put in links (if you have them) to your GitHub, your blog, your Twitter. GitHub is a must for a programming position; keep a rich, well-organized and interesting GitHub profile. A blog is a plus: it shows you know how to express yourself and are communicative. If you normally tweet about your job or IT-related hobbies, also put in your Twitter account. Be aware, though, that most IT companies scan their employees’ social media for antisocial behaviour, like constant trolling, prejudice and, of course, crime-related talk.&lt;/li&gt; &lt;li&gt;Put the city you live in.&lt;/li&gt; &lt;li&gt;Put hobbies related to IT. We love seeing that the applicant is really passionate about the area they work in.&lt;/li&gt; &lt;li&gt;List your personal projects (IT-related) if you have any. In the interview itself we may ask about non-IT-related projects to get to know you better.&lt;/li&gt; &lt;li&gt;Use the PDF format for the résumé. 
In IT, people use a plethora of different operating systems and office suites: the only format that looks uniform and is well parsed in all of them is PDF.&lt;/li&gt; &lt;li&gt;Put your name and the date in the filename.&lt;/li&gt; &lt;li&gt;Put a cellphone contact. This makes it easier to reschedule the interview if Skype crashes, Hangouts doesn’t open or any other kind of comm technology doesn’t work (which is kind of the norm, unfortunately).&lt;/li&gt; &lt;li&gt;Write your email address. It makes it easier to find you if we print the PDF to read with colleagues and then happen not to find your email in our mailbox.&lt;/li&gt; &lt;li&gt;Put your college degree if you have one, but my company normally doesn’t care about it. It looks pretty, though.&lt;/li&gt; &lt;li&gt;Put your technology certifications. We value those much more than a college degree.&lt;/li&gt; &lt;li&gt;Put the talks/workshops/demos/courses you may have given. It shows you can pass along your knowledge.&lt;/li&gt; &lt;li&gt;Describe the area of IT you love the most. It may help us place you better on the team.&lt;/li&gt; &lt;li&gt;If your name isn’t generally associated with your gender, or you’re transgender, specify how you would like to be addressed to avoid tight spots in the interview. Especially in languages like Portuguese, it’s very hard to talk to people in a completely gender-neutral way. Be transparent and tell the employer how you like to be called. At companies that respect diversity, this will be very well received and both parties will have a nicer time when meeting in person.&lt;/li&gt; &lt;li&gt;Specify which languages you speak and how well you speak them (novice, intermediate, fluent, native or something of the sort).&lt;/li&gt; &lt;li&gt;Use a common sans-serif font for titles, and a common serif font for the body. 
Never use more than two fonts.&lt;/li&gt; &lt;li&gt;Put in volunteer work you may have done: it shows you have energy and are prone to kindness, exactly the kind of person we want around us.&lt;/li&gt; &lt;/ul&gt; &lt;h3 id=&quot;dont&quot;&gt;DON’T&lt;/h3&gt; &lt;ul&gt; &lt;li&gt;Put a photo: most IT people are ugly (myself included). Nobody likes to see that, haha. Also, it may trigger some racial bias (even though we try to avoid it, it’s very hard to escape from it).&lt;/li&gt; &lt;li&gt;Write Curriculum Vitae, Resume or anything of the sort on the résumé. The title is your name.&lt;/li&gt; &lt;li&gt;Put your address: we really don’t care which part of the city you live in, as long as it is the same city we want you in.&lt;/li&gt; &lt;li&gt;Put your high-school name.&lt;/li&gt; &lt;li&gt;Put any kind of religious/political views. These also help trigger our unconscious biases, and may even show you don’t know how to separate your personal life from your work life. If your views explicitly forbid you from doing something work-related, like working on Saturdays for example, mention it in the interview.&lt;/li&gt; &lt;li&gt;Use a childish font like Comic Sans. It makes you look immature.&lt;/li&gt; &lt;li&gt;Send the résumé as a .doc, .docx, .odt or anything like that.&lt;/li&gt; &lt;li&gt;Put the phone contact of your mom, SO, or anyone like that.&lt;/li&gt; &lt;li&gt;Use a stupid email name you created when you were a child, or one from a prehistoric ISP domain.&lt;/li&gt; &lt;li&gt;Put your Facebook. Facebook is much more intimate than Twitter. We really don’t care about your interactions with your exes.&lt;/li&gt; &lt;li&gt;Put your marital status. We may ask about it in the interview, but it’s just trivia to break the ice. It’s not relevant to the job.&lt;/li&gt; &lt;li&gt;Put your age. This is really specific to IT. We really don’t care. 
The only exception is if you’re a minor, since that requires a different kind of contract, so in that case mention it.&lt;/li&gt; &lt;li&gt;Write banal hobbies, like Netflix or sleeping. We normally ask about hobbies and things you do to space out in the interview, and it’s completely OK to answer something like this to break the ice, but it doesn’t belong in the document.&lt;/li&gt; &lt;/ul&gt; &lt;p&gt;I hope these tips help you present yourself better. I will soon provide further tips on how to behave in the interview itself, another challenge in IT that isn’t quite like what other jobs present.&lt;/p&gt; &lt;p&gt;Stay tuned.&lt;/p&gt; </description> <pubDate>Sun, 03 Sep 2017 03:00:00 -0300</pubDate> <link>https://bcdonadio.com/2017/cv/</link> <guid isPermaLink="true">https://bcdonadio.com/2017/cv/</guid> <category>business</category> </item> <item> <title>P2K.co: Send Pocket articles to your Kindle</title> <description>&lt;p&gt;My Kindle was mostly underutilized until I found this great freemium service. If you have a Kindle, you must read this! &lt;!--more--&gt;&lt;/p&gt; &lt;p&gt;&lt;a href=&quot;https://p2k.co/&quot;&gt;&lt;img src=&quot;/images/p2k/p2k.png&quot; alt=&quot;P2K header image&quot; /&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;Maybe Amazon is really bad at promoting books on their online store (I sincerely think they need to design their interface better and rethink their recommendation algorithms), maybe I’m just too picky. 
Anyway, the fact is that I bought a Kindle Paperwhite and really haven’t used it as often as I would like because of a simple problem: the things I most wanted to read in a cool paper-like interface were not on the Kindle Store, and Amazon doesn’t have a great open ecosystem where anyone can create apps for their devices.&lt;/p&gt; &lt;p&gt;So I went searching for a good service that would somehow link up my bookmarks of assorted Medium articles that I wanted to read later but never remembered having saved in the first place. I found the excellent &lt;a href=&quot;https://p2k.co/&quot;&gt;p2k.co&lt;/a&gt; service by &lt;a href=&quot;http://emiraydin.com/&quot;&gt;Emir Aydin&lt;/a&gt; and discovered that it can do exactly this!&lt;/p&gt; &lt;p&gt;Basically, &lt;a href=&quot;https://p2k.co/&quot;&gt;p2k.co&lt;/a&gt; interacts with the &lt;a href=&quot;https://getpocket.com/&quot;&gt;Pocket&lt;/a&gt; API to retrieve your bookmarks, generates a custom &lt;a href=&quot;https://fileinfo.com/extension/azw&quot;&gt;AZW file&lt;/a&gt; with the content of those bookmarks plus a great index and Archive/Like buttons, and then sends it directly to your Kindle through your &lt;a href=&quot;https://www.amazon.com/gp/sendtokindle/email&quot;&gt;Kindle mail&lt;/a&gt; - which gets synced ASAP to your device in a push fashion.&lt;/p&gt; &lt;p&gt;The great thing about it is that P2K aggregates all the articles you bookmarked during the day and then sends them at a specific time; I found it great to schedule this for just before I normally go to sleep, so I can catch up on the subjects I found interesting through the day but had no time to read.&lt;/p&gt; &lt;p&gt;It works just great with the Free plan, but I liked the service so much that, even though I really don’t need most of the paid features, I subscribed to the USD $5.00 Platinum plan to support its development.&lt;/p&gt; &lt;p&gt;If you’re anything like me, you’ll love this great service.&lt;/p&gt; </description> <pubDate>Mon, 28 Aug 
2017 03:00:00 -0300</pubDate> <link>https://bcdonadio.com/2017/p2k/</link> <guid isPermaLink="true">https://bcdonadio.com/2017/p2k/</guid> <category>recommendations</category> </item> <item> <title>Redismoke: a Redis replication smoke test</title> <description>&lt;p&gt;I’ve written a small utility to smoke test a Redis replication set. It’s a simple Python script that should be run as a daemon to test whether writes to a Redis master are being correctly replicated to the slaves. &lt;!--more--&gt;&lt;/p&gt; &lt;p&gt;The necessity of such a script arose from the fact that Redis Sentinel, the tool that Redis uses to implement a kind of Raft, isn’t very reliable and has the bad habit of stopping work without much notice.&lt;/p&gt; &lt;p&gt;This utility is the work of a single afternoon so far, and is currently being worked on to integrate easily with a SolarWinds instance, just as it may be extended to work with other monitoring tools.&lt;/p&gt; &lt;p&gt;You can find the code in the GitHub repository below:&lt;/p&gt; &lt;p&gt;&lt;a href=&quot;https://github.com/stone-payments/redismoke&quot;&gt;https://github.com/stone-payments/redismoke&lt;/a&gt;&lt;/p&gt; &lt;p&gt;Patches are welcome. The code is distributed under the MIT license. The next features I will implement are:&lt;/p&gt; &lt;ul&gt; &lt;li&gt;Unit testing (sure, test the test all the way down)&lt;/li&gt; &lt;li&gt;SolarWinds format output&lt;/li&gt; &lt;li&gt;Single run&lt;/li&gt; &lt;li&gt;Separate the CLI from the lib code&lt;/li&gt; &lt;/ul&gt; &lt;p&gt;Maybe someday I will write about the hell that was creating an idempotent Ansible role for Redis…&lt;/p&gt; </description> <pubDate>Tue, 11 Jul 2017 22:00:00 -0300</pubDate> <link>https://bcdonadio.com/2017/redismoke/</link> <guid isPermaLink="true">https://bcdonadio.com/2017/redismoke/</guid> <category>development</category> </item> <item> <title>Authentication in the cloud</title> <description>&lt;p&gt;Authentication is hard. 
How do you deal with it in a sane way in a cloud environment? There are two approaches to follow, each with its upsides and downsides, but which one is the best for your environment? &lt;!--more--&gt;&lt;/p&gt; &lt;p&gt;&lt;img src=&quot;/images/authentication-cloud/padlock.jpg&quot; alt=&quot;Padlock header image&quot; /&gt;&lt;/p&gt; &lt;h3 id=&quot;foreword&quot;&gt;Foreword&lt;/h3&gt; &lt;p&gt;If you’ve got more than a handful of systems, there’s an inescapable need to find a way to unify authentication across them. However, this is not an easy feat if you have the relatively common requirement of minimizing downtime as much as possible. So not only do you need to keep bad people out, you also need to consistently let good people in. Executing this decision must always be possible.&lt;/p&gt; &lt;p&gt;Therefore, you have two main approaches to this issue: centralized and distributed decision. This is a logical issue, not a technological one, and it’s the first choice you must make to go down this path. Each comes with its own pros and cons, but there’s definitely one that best fits your environment. There’s, however, no one-size-fits-all: it ultimately depends on your personal and organizational needs.&lt;/p&gt; &lt;p&gt;From now on, I will present a high-quality example from each of those two domains. These are battle-tested solutions, used by big companies like Red Hat, Facebook, Netflix and so on. 
With this initial information, I hope you will at least have an initial idea of what you want, and will be able to kickstart your own research.&lt;/p&gt; &lt;p&gt;Also, in this article I will use the term “authentication” to mean both authentication and authorization, as there’s no need here to treat those two problems differently.&lt;/p&gt; &lt;h3 id=&quot;centralized-authentication&quot;&gt;Centralized authentication&lt;/h3&gt; &lt;p&gt;Don’t be fooled by the &lt;em&gt;cloud hype&lt;/em&gt; these days that preaches absolute decentralization and promise-like behaviour everywhere. If you have legacy systems, or your objective isn’t simply serving the biggest crowd of users you can possibly imagine, distributed authentication is a problem you don’t need to have.&lt;/p&gt; &lt;p&gt;In fact, Microsoft proved this with its Active Directory solution. It works really well, and meets all the goals of a typical small or mid-sized organization. Yes, integration with it isn’t the biggest of its strengths, but Red Hat solved this problem in the Linux domain for sure with its own product.&lt;/p&gt; &lt;p&gt;I will now present this lesser-known product from Red Hat: Red Hat Identity Management, backed by the FreeIPA open-source project. It works so well that you may be forgiven for thinking it is a single, monolithic solution like Microsoft AD. It is, on the contrary, composed of smaller open-source projects and standardized protocols that are already part of every POSIX system. Hence its ease of integration: there are already a lot of libraries and tooling ready for it, and your legacy system can be easily instructed to use it.&lt;/p&gt; &lt;p&gt;Also, unlike those previous tools, like a hand-maintained OpenLDAP server or a Kerberos KDC, it is freaking easy to deploy. There’s absolutely no need to, &lt;em&gt;ugh&lt;/em&gt;, write LDIF files by hand or anything. 
It’s just as easy as setting up an AD domain controller, and even easier to connect clients to it.&lt;/p&gt; &lt;p&gt;The FreeIPA stack includes the following software:&lt;/p&gt; &lt;ul&gt; &lt;li&gt;389 LDAP server&lt;/li&gt; &lt;li&gt;MIT KDC server&lt;/li&gt; &lt;li&gt;ISC BIND DNS server&lt;/li&gt; &lt;li&gt;NTF NTPd server&lt;/li&gt; &lt;li&gt;Dogtag CA &amp;amp; RA server&lt;/li&gt; &lt;li&gt;Apache httpd server&lt;/li&gt; &lt;/ul&gt; &lt;p&gt;And all of it is centrally managed by the FreeIPA web interface, the FreeIPA CLI or the FreeIPA webservice API. There’s absolutely no need to directly touch any of those servers’ config files.&lt;/p&gt; &lt;h4 id=&quot;server-side&quot;&gt;Server-side&lt;/h4&gt; &lt;p&gt;Provisioning your first FreeIPA server is a little different from most POSIX tools you’re used to, as you don’t really need to touch any config file. Basically, you install the &lt;em&gt;ipa-server&lt;/em&gt; package with your package manager and then simply call the &lt;em&gt;ipa-server-install&lt;/em&gt; script, passing as arguments all the infrastructure details you find relevant at install time. Aside from the main domain, you can easily change all of those later with the server already up and running.&lt;/p&gt; &lt;p&gt;You can find a &lt;a href=&quot;https://www.freeipa.org/page/Quick_Start_Guide&quot;&gt;quickstart guide in the FreeIPA documentation&lt;/a&gt;.&lt;/p&gt; &lt;p&gt;From then on, almost all aspects of the FreeIPA deployment and identity content can be administered through its web interface. Even a junior sysadmin who has never touched FreeIPA before can do it. 
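As a rough sketch of the server-side provisioning described above (the realm, domain and passwords below are placeholders of my choosing, not values from the post):

```shell
# Hypothetical FreeIPA server provisioning sketch; realm, domain and
# passwords are placeholders. On a RHEL/CentOS-family host:
yum install -y ipa-server

# A single command configures 389 DS, the MIT KDC, BIND, Dogtag and
# httpd together, with no config files to touch:
ipa-server-install \
  --realm EXAMPLE.COM \
  --domain example.com \
  --ds-password 'DirMgrPassw0rd' \
  --admin-password 'AdminPassw0rd' \
  --setup-dns --no-forwarders \
  --unattended
```

Everything after the domain choice can be changed later through the web interface or CLI, as noted above.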
A few advanced features can only be managed through the FreeIPA CLI, but even those are meant to be implemented in the web interface as soon as there’s developer power to do it.&lt;/p&gt; &lt;p&gt;Even though the FreeIPA server is centralized by nature, it can be distributed across multiple hot-hot FreeIPA servers, providing you with failure tolerance in an intrinsically centralized infrastructure. The &lt;a href=&quot;https://www.freeipa.org/page/Deployment_Recommendations&quot;&gt;FreeIPA documentation about deployment recommendations&lt;/a&gt; shows a nice graphical visualization of the supported deployment architectures, intra- and inter-datacenter.&lt;/p&gt; &lt;p&gt;&lt;img src=&quot;/images/authentication-cloud/freeipa-16.png&quot; alt=&quot;FreeIPA 16 server architecture&quot; /&gt;&lt;/p&gt; &lt;h4 id=&quot;client-side&quot;&gt;Client-side&lt;/h4&gt; &lt;p&gt;Just like on the server side, you don’t need to manually configure any of the subsystems that comprise a full-featured FreeIPA client. All you have to do is install the &lt;em&gt;ipa-client&lt;/em&gt; package and run the &lt;em&gt;ipa-client-install&lt;/em&gt; script with minimal arguments, since the script automatically discovers most of the configuration through the DNS SRV entries added to your DNS by the FreeIPA server. Then you just reboot. Really.&lt;/p&gt; &lt;p&gt;The machine’s SSH server key has now been added to its host entry on the FreeIPA server, and the FreeIPA client shim has been installed into both the SSH server and client on the system, enabling you to automatically trust every other host’s key in your infrastructure, since FreeIPA keeps a record of them. Also, all users - as long as they’re authorized to - can already log into this system through SSH with the SSH keys associated with their user entry, use a password, or even combine those with two-factor authentication. 
Everything out of the box.&lt;/p&gt; &lt;p&gt;The system’s DNS A and PTR entries were already added to the FreeIPA DNS server, so you can reach it just by knowing its name. The system’s DNS entries are automatically updated when it boots up, so you can also use dynamic DNS leases.&lt;/p&gt; &lt;p&gt;If your system provides a service that uses an SSL certificate, you can register it in FreeIPA to be automatically provisioned and kept up to date through automatic renewals, enabling you to use very short-lived certificates to enhance security. Given that the Dogtag server used by FreeIPA is also used by many commercial Certificate Authorities (CAs) around the world, you can rest assured that the implementation of this X.509 scheme is sane and trusted.&lt;/p&gt; &lt;p&gt;Finally, you can provide Single Sign-On for both users and servers with the Kerberos service provided by FreeIPA. Once you’ve called &lt;em&gt;kinit&lt;/em&gt; and typed your password (which may be combined with a 2FA token), you can seamlessly log into webservices, SSH servers and every type of service that already supports Kerberos. This includes Windows. Oh yes, FreeIPA integrates with Windows clients and can even be part of a Microsoft Active Directory domain.&lt;/p&gt; &lt;p&gt;This is all nicely tied up in the provisioning phase by the &lt;em&gt;ipa-client-install&lt;/em&gt; script, and in the runtime phase by the &lt;a href=&quot;https://fedoraproject.org/wiki/Features/SSSD&quot;&gt;SSSD client&lt;/a&gt;, so you don’t need to configure any of those manually.&lt;/p&gt; &lt;h4 id=&quot;disaster-situation&quot;&gt;Disaster situation&lt;/h4&gt; &lt;p&gt;The big question in authentication for sysadmins, however, isn’t about all the features that a given solution brings to the table, but what happens when shit hits the fan, everything stops working and you need to act fast. 
SSSD handles that for you, somewhat.&lt;/p&gt; &lt;p&gt;SSSD, just like Windows, caches credentials in order to authorize access to the system even when it is unable to contact the server and do it online. The credential cache also acts to speed things up and decrease the load on the server. If you accessed the system some time ago, chances are you’ll be able to do it again in the event of a disaster (connection down, FreeIPA server crashed or otherwise unavailable).&lt;/p&gt; &lt;p&gt;The keyword in this sentence is, however, &lt;strong&gt;chances&lt;/strong&gt;. The SSSD client can’t authorize you if you have already dropped out of its cache, or if it has never seen you. You don’t carry any authorization information with you other than your password/certificate. The server always has to reach the FreeIPA instance to ask what you are or are not allowed to do.&lt;/p&gt; &lt;p&gt;Also, this cache is wiped out on every reboot, so in the event of a datacenter-wide power failure, you won’t be able to log into any of your systems before the FreeIPA servers are up and the network is properly working.&lt;/p&gt; &lt;p&gt;In most companies, this can easily be accepted in exchange for such ease of administration. In a company like Facebook, this is unthinkable.&lt;/p&gt; &lt;h3 id=&quot;distributed-authentication&quot;&gt;Distributed authentication&lt;/h3&gt; &lt;p&gt;Basically, Facebook was brought to the conclusion that using a centralized solution at their global scale, with them &lt;a href=&quot;http://blog.sqlizer.io/posts/facebook-on-aws/&quot;&gt;reaching almost a million active servers&lt;/a&gt;, is completely insane and very downtime-prone. 
Therefore, after quite a bit of investigation, they were able to create a solution that met all their requirements with something that was already available off the shelf, but a little obscure until then.&lt;/p&gt; &lt;p&gt;I’m talking about the OpenSSH certificate system, which &lt;a href=&quot;https://code.facebook.com/posts/365787980419535/scalable-and-secure-access-with-ssh/&quot;&gt;Marlon Dutra so brilliantly explained on Facebook’s Code blog&lt;/a&gt;.&lt;/p&gt; &lt;p&gt;This solution provides tight security controls, high availability and reliability, and even high performance, with a simple trick: you don’t need to contact a login server at all. All the information needed to authorize an access is already contained in the user certificate, which the user provides when they want to log into the system, and the system being accessed simply checks that the signature matches the CA trusted by the system.&lt;/p&gt; &lt;p&gt;There’s no SPOF here. You may even log into a machine that is completely isolated from the network for whatever reason, since the CA certificate is contained in its image (or programmatically rotated every once in a while).&lt;/p&gt; &lt;p&gt;Maintaining such an infrastructure, though, is a little cumbersome: properly organizing and keeping a secure CA infrastructure is tough and costly, without the helping hands that a solution like FreeIPA might bring. But it can be done, provided you’ve reached the point where this becomes beneficial in terms of time and cost.&lt;/p&gt; &lt;h4 id=&quot;server-side-1&quot;&gt;Server-side&lt;/h4&gt; &lt;p&gt;Well, there’s no server side. There’s just a secure machine that you use to generate certificates, and it may very well be offline for enhanced security. You do need, however, a simple webserver to provide the Certificate Revocation List (CRL) for certificates that were compromised or otherwise put out of use.&lt;/p&gt; &lt;p&gt;Even this webserver, though, doesn’t need to be kept up all the time. 
As long as a client system has a recent enough version of the list, you are completely OK. Since this list is also signed, you may very well distribute it at various points of the network or even have clients exchange it in a peer-to-peer fashion.&lt;/p&gt; &lt;p&gt;What you might want is a central way to audit those accesses, like a &lt;a href=&quot;https://www.graylog.org/&quot;&gt;Graylog&lt;/a&gt; or &lt;a href=&quot;https://hive.apache.org/&quot;&gt;Hive&lt;/a&gt; instance, enabling you to search across chronological data and find possible abuses or invasions. OpenSSH conveniently logs all the information about the certificate used to log into the system to the journal, rendering the data extraction quite simple. Even so, a log-system downtime isn’t going to impact your ability to access your system: the log is delivered in a best-effort fashion.&lt;/p&gt; &lt;h4 id=&quot;client-side-1&quot;&gt;Client-side&lt;/h4&gt; &lt;p&gt;In the distributed solution, this is where the magic resides. 
By instructing your OpenSSH server to authorize users based on their certificate’s information after checking its signature, you can carry with the user all the info you might want.&lt;/p&gt; &lt;p&gt;You will need to configure the client to pipe all of its logs to your logging solution, but this problem was solved long ago with &lt;a href=&quot;http://www.rsyslog.com/&quot;&gt;rsyslog&lt;/a&gt; or more modern tools like &lt;a href=&quot;https://github.com/mheese/journalbeat&quot;&gt;journalbeat&lt;/a&gt;.&lt;/p&gt; &lt;p&gt;The downside, however, is that the authorization logic is partly kept on the client itself: you need a way to change this information when you find it necessary, be it through a configuration manager like &lt;a href=&quot;https://puppet.com/&quot;&gt;Puppet&lt;/a&gt; or &lt;a href=&quot;https://www.ansible.com/&quot;&gt;Ansible&lt;/a&gt; (which you should already be using), or - probably like Facebook does - by cycling the whole VM image.&lt;/p&gt; &lt;h4 id=&quot;disaster-situation-1&quot;&gt;Disaster situation&lt;/h4&gt; &lt;p&gt;As long as the CA and CRL are kept within their expiration dates, you will be able to log into the system without help from any external service. Even completely offline.&lt;/p&gt; &lt;p&gt;Also, there’s no performance bottleneck even with billions of simultaneous logins; after all, you have no central place to log into. If you’ve ever used an undersized Microsoft AD deployment on a college campus or at a big company, you know what I’m talking about.&lt;/p&gt; &lt;p&gt;Revoking a key is as simple as adding the key’s hash to the CRL, signing it and publishing it. The revocation will be distributed as fast as you configured your clients to fetch it. 
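To make the lifecycle concrete, here is a minimal sketch with OpenSSH’s ssh-keygen; the file names and the principal are hypothetical, and OpenSSH’s own revocation list format is called a KRL rather than an X.509 CRL:

```shell
set -e
# 1. Generate the CA key pair (kept on an offline, secure machine).
ssh-keygen -t ed25519 -N '' -f user_ca -C 'user-ca'

# 2. Generate a user key pair (normally done by the user themselves).
ssh-keygen -t ed25519 -N '' -f alice -C 'alice'

# 3. Sign the user's public key: identity "alice", principal "alice",
#    valid for one week from now. The authorization info travels inside
#    the resulting alice-cert.pub; no login server is ever contacted.
ssh-keygen -s user_ca -I alice -n alice -V +1w alice.pub

# 4. Revoke later by putting the key in a KRL and publishing that file.
ssh-keygen -k -f revoked.krl alice.pub
```

Each server then only needs `TrustedUserCAKeys` pointing at the CA public key and `RevokedKeys` pointing at the fetched KRL in its sshd_config.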
Although not as instantaneous as in the centralized solution, it is a very acceptable trade-off in face of the benefits brought by this choice.&lt;/p&gt; &lt;h3 id=&quot;the-third-way&quot;&gt;The third way&lt;/h3&gt; &lt;p&gt;Netflix liked a little bit of both, and developed &lt;a href=&quot;https://github.com/Netflix/bless&quot;&gt;another solution called BLESS&lt;/a&gt;, which runs a CA on demand, signing very short-lived SSH certificates (5 minutes or so) based on business rules.&lt;/p&gt; &lt;p&gt;While this certainly solves the performance issue, it certainly can’t be used as the only authentication mechanism in their infrastructure, as the BLESS system itself runs on the &lt;a href=&quot;https://aws.amazon.com/lambda/&quot;&gt;AWS Lambda service&lt;/a&gt;, and we know it is too complex to be easily relied on - just as the &lt;a href=&quot;https://www.theregister.co.uk/2017/03/01/aws_s3_outage/&quot;&gt;recent outage taught us&lt;/a&gt;.&lt;/p&gt; &lt;h3 id=&quot;afterword&quot;&gt;Afterword&lt;/h3&gt; &lt;p&gt;If there’s one piece of advice you should take from this article, it is that you should plan well for your disaster scenarios. 
There are very few worse feelings than knowing you’ve locked yourself out of your systems in the middle of an outage - other than, maybe… &lt;a href=&quot;https://www.theregister.co.uk/2017/02/01/gitlab_data_loss/&quot;&gt;almost completely losing all of your customer data&lt;/a&gt;.&lt;/p&gt; &lt;p&gt;However, you shouldn’t be so paranoid that you decentralize everything for the sake of it, and find yourself in an inextricable mess of distributed logic with bad visibility.&lt;/p&gt; </description> <pubDate>Tue, 04 Jul 2017 02:00:00 -0300</pubDate> <link>https://bcdonadio.com/2017/authentication-cloud/</link> <guid isPermaLink="true">https://bcdonadio.com/2017/authentication-cloud/</guid> <category>sysadmin</category> </item> <item> <title>When the blue whale sinks</title> <description>&lt;p&gt;A bug in the Linux kernel has been affecting thousands of people for more than 3 years, and so far there’s no complete fix. Continue reading to know what to expect when encountering this issue (hopefully not) in production. &lt;!--more--&gt;&lt;/p&gt; &lt;p&gt;Let’s get started with the biggest PITA I’ve ever experienced with a piece of software in production, which after &lt;a href=&quot;https://github.com/moby/moby/issues/5618&quot;&gt;more than 3 years of being reported&lt;/a&gt; (at least), is far from being completely fixed. This issue isn’t present in the Docker code itself, but rather in the Linux kernel code. It affects not only Docker, but any kind of software that uses the Linux network stack to create devices and namespaces frequently, like LXC, OpenStack, rkt, Proxmox, etc…&lt;/p&gt; &lt;p&gt;The tell-tale sign of hitting this bug is receiving a message similar to this every 10 seconds on the VT/syslog/journal of your server:&lt;/p&gt; &lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-text&quot; data-lang=&quot;text&quot;&gt;unregister_netdevice: waiting for veth1 to become free. 
Usage count = 1&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt; &lt;p&gt;The message varies: the interface may be any one currently being manipulated by Docker (most frequently the &lt;em&gt;lo&lt;/em&gt; interface of a container), and the usage count may be larger than one.&lt;/p&gt; &lt;p&gt;If you’ve had some experience with multi-threaded programming, you can instantly diagnose this as a race condition, and even cringe at the memory of debugging that kind of issue. However, this issue has affected so many people for so long (more than 3 years) that you might assume it has already been fixed, right? Nope.&lt;/p&gt; &lt;p&gt;Once the bug is triggered, the situation is the following: you’re unable to create, delete or change any network device on the whole system, rendering Docker basically useless - as you need to do exactly that to create and delete containers. There’s absolutely no fix for the bug so far, except a few mitigations and the Windows-style workaround: reboot your computer. Seriously.&lt;/p&gt; &lt;p&gt;Even worse is the fact that reproducing this issue is far from straightforward. You hit this bug basically by creating and deleting containers frequently, but the frequency needed to trigger it varies wildly. Some people never hit the bug, while others get systems frozen by it multiple times a day. The Red Hat kernel (including CentOS) seems especially prone to this problem, but all other distributions have reports of being affected as well. 
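If you suspect a host has already hit the bug, a quick check is to grep the kernel log for the tell-tale string shown above (a sketch; the exact interface name will vary):

```shell
# Count occurrences of the tell-tale message in the kernel ring buffer;
# anything above zero means this host is affected and will need a reboot.
dmesg | grep -c 'unregister_netdevice: waiting for'
```
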
Ironically, the &lt;em&gt;sosreport&lt;/em&gt; tool used by Red Hat to collect information and logs on the system in order to diagnose the situation simply doesn’t work after the issue is triggered.&lt;/p&gt; &lt;p&gt;There are, however, partial mitigations: you can put your virtual bridge &lt;em&gt;docker0&lt;/em&gt; in promiscuous mode to delay the interface teardown just enough to avoid the race condition (promiscuous mode has no other relation to the problem than this), or disable IPv6 support in the kernel, since the IPv6 stack is more prone to hit the bug than the IPv4 one.&lt;/p&gt; &lt;p&gt;As of the time of this publication, a fix has already been released for the IPv6 variant of the bug, but people still get the annoying message and frozen behaviour, suggesting that the bug has &lt;strong&gt;multiple causes&lt;/strong&gt;, spread all over the network subsystem of the kernel.&lt;/p&gt; &lt;p&gt;This particular fix was released in &lt;a href=&quot;https://github.com/torvalds/linux/commit/751eb6b6042a596b0080967c1a529a9fe98dac1d&quot;&gt;this linux 4.8 commit&lt;/a&gt; and backported to RHEL/CentOS in the kernel-3.10.0-514.21.1.el7 package, as you can follow in &lt;a href=&quot;https://access.redhat.com/articles/3034221&quot;&gt;RHSA#3034221&lt;/a&gt; (RHN access needed).&lt;/p&gt; &lt;p&gt;A very fearsome, but also very possible, scenario: if you have a PaaS system that auto-heals (like OpenShift or tsuru), you can unleash a chain reaction by triggering the bug on one system. When the auto-heal function starts the containers on other machines, you can trigger the bug again on those systems. Then the thing grows. 
Exponentially.&lt;/p&gt; &lt;p&gt;Until this issue is completely fixed, I’m very cautious about using Docker for anything other than stateless applications with at least double redundancy.&lt;/p&gt; </description> <pubDate>Sat, 03 Jun 2017 19:40:00 -0300</pubDate> <link>https://bcdonadio.com/2017/when-the-blue-whale-sinks/</link> <guid isPermaLink="true">https://bcdonadio.com/2017/when-the-blue-whale-sinks/</guid> <category>sysadmin</category> </item> <item> <title>Yum repository for nginx with ALPN on EL7/6</title> <description>&lt;p&gt;Chrome needs ALPN to use HTTP/2, ALPN needs OpenSSL 1.0.2, RedHat will only ship OpenSSL 1.0.1 on RHEL 7, and nginx uses the OpenSSL provided by the system. Here’s the solution to this mess!&lt;!--more--&gt;&lt;/p&gt; &lt;p&gt;&lt;strong&gt;UPDATE 2&lt;/strong&gt;: The module &lt;a href=&quot;https://github.com/eustas/ngx_brotli&quot;&gt;ngx_brotli from eustas&lt;/a&gt; was added to the build to provide Brotli compression support. It is a fork of the original &lt;a href=&quot;https://github.com/google/ngx_brotli&quot;&gt;ngx_brotli from Google&lt;/a&gt;, with changes to build with recent nginx versions. Take a look at the &lt;a href=&quot;https://opensource.googleblog.com/2015/09/introducing-brotli-new-compression.html&quot;&gt;Brotli announcement&lt;/a&gt; to understand how it may help your pages load faster.&lt;/p&gt; &lt;p&gt;&lt;strong&gt;UPDATE:&lt;/strong&gt; EL7 was updated to OpenSSL 1.0.2 with the release of EL7.4, which provides proper ALPN support out-of-the-box on EL7. However, I will continue providing packages for EL7 for anyone who wants to keep using a built-in OpenSSL from the most recent stable branch. 
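If you want to confirm that ALPN actually works against your server, openssl s_client can show what was negotiated (a sketch; example.com stands in for your own host, and the -alpn flag requires an OpenSSL 1.0.2+ client):

```shell
# Offer the h2 protocol via ALPN and print what the server agreed to;
# look for "ALPN protocol: h2" in the output.
printf 'QUIT\n' | openssl s_client -alpn h2 -connect example.com:443 2>/dev/null | grep -i 'ALPN'
```
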
The situation on EL6 remains unchanged: it still needs custom packages to provide ALPN support.&lt;/p&gt; &lt;p&gt;TL;DR: Paste this (if you trust me) in your EL7 or EL6 system:&lt;/p&gt; &lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;curl &lt;span class=&quot;nt&quot;&gt;-s&lt;/span&gt; https://raw.githubusercontent.com/bcdonadio/nginx-alpn/master/repoinst_mainline.sh | &lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;bash &lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;yum &lt;span class=&quot;nb&quot;&gt;install&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-y&lt;/span&gt; nginx&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt; &lt;ul&gt; &lt;li&gt;Current nginx version: 1.15.4&lt;/li&gt; &lt;li&gt;Current OpenSSL version: 1.1.1&lt;/li&gt; &lt;/ul&gt; &lt;p&gt;Now, let me explain the situation a little bit. Google decided that after Chrome 51, released on May 31st, 2016, HTTP/2 connections would only be negotiated through Application-Layer Protocol Negotiation (ALPN) instead of the Next Protocol Negotiation (NPN) used so far in TLS-enabled HTTP streams. The new mechanism is great because it avoids some additional round trips over the wire to negotiate the security parameters, reducing latency, but it needs to be supported on the server side.&lt;/p&gt; &lt;p&gt;Because of the Heartbleed vulnerability, RedHat is no longer backporting features into OpenSSL and other crucial packages: the vulnerability was absent from the OpenSSL version originally shipped with RHEL6 and was only introduced later by a stable backport. This creates the whole problem.&lt;/p&gt; &lt;p&gt;RHEL7 shipped with OpenSSL 1.0.1, and RedHat has no plans to update it - &lt;a href=&quot;https://bugzilla.redhat.com/show_bug.cgi?id=1276310&quot;&gt;as per this RFE ticket&lt;/a&gt; - until RHEL8, which is only scheduled to launch in 2018. 
That’s two more years without ALPN support.&lt;/p&gt; &lt;p&gt;Therefore, none of the Apache httpd or nginx packages shipped in the official RedHat repos, in the Fedora EL repos, or even in the Apache and nginx projects’ own Yum repos can support ALPN, since they all use the OpenSSL dynamic library available on the target system.&lt;/p&gt; &lt;p&gt;Replacing only the OpenSSL package with a more recent version is not an option, since that would introduce an ABI mismatch between the library available on the system and the one the applications expect. The applications also need to be rebuilt.&lt;/p&gt; &lt;p&gt;The good news is that since we need to rebuild the application anyway, we can easily instruct it to use a different OpenSSL library. But this comes with a whole new bag of problems: you need to keep track of both the OpenSSL and the application (nginx or Apache httpd) security bulletins, besides having to either create a new Yum repository yourself or build the packages on every system you manage.&lt;/p&gt; &lt;p&gt;As such, I decided to build those packages and make them available to the community, and the &lt;a href=&quot;https://packagecloud.io/&quot;&gt;packagecloud team&lt;/a&gt; was happy to host them.&lt;/p&gt; &lt;p&gt;The packages available at &lt;a href=&quot;https://packagecloud.io/nginx-alpn/mainline&quot;&gt;packagecloud&lt;/a&gt; are distributed either through direct download or through a GPG-signed Yum repository. Also, I always sign the packages themselves with &lt;a href=&quot;/gpg&quot;&gt;my own GPG key&lt;/a&gt;. 
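Once the keys are imported, rpm itself can confirm that a downloaded package is intact and signed (a sketch; the filename is hypothetical):

```shell
# Verify the digests and GPG signature of a downloaded package;
# only install it if the output ends in "OK".
rpm -K nginx-1.15.4-1.el7.x86_64.rpm
```
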
Therefore, &lt;em&gt;if you trust me&lt;/em&gt;, you can be certain that the package is safe.&lt;/p&gt; &lt;p&gt;Basically, the script at the top of this post will do the following:&lt;/p&gt; &lt;ul&gt; &lt;li&gt;Ensure curl, sed and pygpgme are installed&lt;/li&gt; &lt;li&gt;Create a repository config for this repo in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/yum.repos.d&lt;/code&gt;&lt;/li&gt; &lt;li&gt;Install the packagecloud and my own GPG keys on the RPM keyring&lt;/li&gt; &lt;/ul&gt; &lt;p&gt;Note that this nginx package has a built-in OpenSSL library and will in no way touch the OpenSSL library already present on the system. Also, the script will not install nginx by itself, since nginx is modular and you may or may not want to also install its modules.&lt;/p&gt; &lt;p&gt;To build those packages, I’ve written some Docker recipes for EL7 and EL6, fit for either RHEL or CentOS, and those recipes are also open to scrutiny on &lt;a href=&quot;https://github.com/bcdonadio/nginx-alpn&quot;&gt;GitHub&lt;/a&gt;.&lt;/p&gt; &lt;p&gt;I’ve also modified the &lt;a href=&quot;https://packagecloud.io/install/repositories/nginx-alpn/mainline/script.rpm.sh&quot;&gt;default packagecloud Yum repo install script&lt;/a&gt; to include the installation of my public GPG key, ensuring extra protection against unauthorized modifications to the packages.&lt;/p&gt; &lt;p&gt;I vow to keep this repository updated until the EOL of RHEL7 and RHEL6.&lt;/p&gt; &lt;p&gt;If you liked this, drop me a line in the comments and star the repo on GitHub!&lt;/p&gt; </description> <pubDate>Sun, 07 Feb 2016 05:18:00 -0200</pubDate> <link>https://bcdonadio.com/2016/nginx-alpn-el/</link> <guid isPermaLink="true">https://bcdonadio.com/2016/nginx-alpn-el/</guid> <category>sysadmin</category> </item> <item> <title>Freeing myself from my ISP with LTE and a Raspberry: Part 1</title> <description>&lt;p&gt;I really, really hate my 
ISP.&lt;!--more--&gt;&lt;/p&gt; &lt;p&gt;That said, let me explain what I wanted. Since my connection is very unstable, still running on VDSL over PSTN copper wires installed 50 years ago, I decided that a second internet connection was needed. Even when fiber is finally deployed here, it’s still a good idea to have a completely separate connection from a completely unrelated provider, since clusterfucks do happen. Often.&lt;/p&gt; &lt;p&gt;So I was left with some interesting choices (none of them ideal): pay the only other ISP with cables around here (DOCSIS over coax, which isn’t really better) and live with its ridiculously low transfer cap, get a long-distance WiFi provider with low speeds and poor reception, or go with LTE. By the title, you already know what I chose. Despite also having some very low data caps, it had two main advantages: it would survive heavy rain - which frequently disrupts the cabling - and I was &lt;em&gt;already&lt;/em&gt; paying for it (for my smartphone, duh). All I had to do was get an additional SIM card and a USB modem. And so I did: the chosen one was the Huawei E3276.&lt;/p&gt; &lt;p&gt;&lt;img src=&quot;/images/freeing-from-isp/e3276.jpg?1&quot; alt=&quot;Huawei E3276&quot; /&gt;&lt;/p&gt; &lt;p&gt;On MercadoLivre.com.br there was an interesting reflector/enclosure from the company Aquario, which gave me a way to mount the modem on the tower and protection from the weather. Plus, there &lt;em&gt;may&lt;/em&gt; be some small advantage from the reflector in signal quality, but I’m skeptical. Really, it shouldn’t make a noticeable difference.&lt;/p&gt; &lt;p&gt;&lt;img src=&quot;/images/freeing-from-isp/reflector-aquario.jpg&quot; alt=&quot;Aquario USB Modem Enclosure&quot; /&gt;&lt;/p&gt; &lt;p&gt;However, there’s an issue: for LTE to work at full speed, the modem needs a direct view of the ERB (the LTE base station). 
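Line-of-sight here really means keeping the first Fresnel zone clear, and its radius at an obstacle is easy to estimate (a sketch with made-up distances and an assumed 700 MHz carrier):

```shell
# First Fresnel zone radius r = sqrt(lambda * d1 * d2 / (d1 + d2)),
# where d1/d2 are the distances from each end to the obstacle, in meters.
awk 'BEGIN {
  c = 299792458; f = 700e6            # speed of light; carrier frequency (assumed)
  d1 = 900; d2 = 100                  # tower-to-tree and tree-to-ERB distances (made up)
  lambda = c / f
  r = sqrt(lambda * d1 * d2 / (d1 + d2))
  printf "Fresnel radius at the obstacle: %.1f m\n", r
}'
```

If the obstacle pokes less than r meters into the straight line, the link should be mostly unaffected.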
Lucky me, my father is a ham radio operator, and - yup - we have a &lt;strong&gt;goddam antenna tower&lt;/strong&gt; above our roof. The tower provides a view of the closest ERB with no obstacles in the &lt;a href=&quot;https://en.wikipedia.org/wiki/Fresnel_zone&quot;&gt;Fresnel zone&lt;/a&gt;. And yes, I calculated it.&lt;/p&gt; &lt;p&gt;The main obstacle is a tree close to my house, right on the straight line connecting my antenna tower and the ERB. Knowing the height of the tree, the height of the ERB, the distance from my tower to the ERB, the distance to the tree and the lowest frequency that will be used, all I had to do was solve for x. Google Earth Pro easily provided the measurements. An engineering background comes in handy at times like these, huh?&lt;/p&gt; &lt;p&gt;Then I noticed another problem: USB only extends 5 meters without repeaters, and the 5 volts drop quickly over long cable runs, making the connection unstable or even unusable. The USB host had to be close to the modem, probably on the tower itself. But hey, that’s easy. I just got a Raspberry Pi that I had lying around and bought a waterproof case to put it inside. To power it, I bought a Power over Ethernet splitter and injector. For the power supply, I used an old notebook brick that outputs 19V at 2A.&lt;/p&gt; &lt;p&gt;&lt;img src=&quot;/images/freeing-from-isp/poe-adapter.jpg&quot; alt=&quot;Power over Ethernet Adapters&quot; /&gt;&lt;/p&gt; &lt;p&gt;Now I had to transform the less-than-19V coming out of the Ethernet cable (copper losses, remember?) into steady, noise-free 5V to power the Pi. For this task, I bought a USB vehicle charger, tore it apart, and 3D printed a slightly better case with a proper barrel connector (instead of the original lighter plug).&lt;/p&gt; &lt;p&gt;Finally, now that I had all this I thought… fuck it. Let’s add a WiFi adapter and a high-gain antenna. 
This enables me to get a connection from nearby open hotspots, which have the wonderful advantage of &lt;em&gt;not having data caps&lt;/em&gt;. I chose the TP-Link T2UH USB 802.11b/g/n 2.4/5GHz adapter. The antenna was an omnidirectional one from a local manufacturer, which advertises 12dBi of gain - but again, I really doubt it.&lt;/p&gt; &lt;table&gt; &lt;tbody&gt; &lt;tr&gt; &lt;td&gt;&lt;img src=&quot;/images/freeing-from-isp/antenna-12dbi.jpg&quot; alt=&quot;12dBi 2.4GHz Antenna&quot; /&gt;&lt;/td&gt; &lt;td&gt;&lt;img src=&quot;/images/freeing-from-isp/t2uh.jpg?1&quot; alt=&quot;TP-Link T2UH Wifi Adapter&quot; /&gt;&lt;/td&gt; &lt;/tr&gt; &lt;/tbody&gt; &lt;/table&gt; &lt;p&gt;In the next post, I will talk about how I put all these things together and how I installed them on the tower.&lt;/p&gt; </description> <pubDate>Sun, 07 Feb 2016 05:18:00 -0200</pubDate> <link>https://bcdonadio.com/2016/freeing-myself-from-my-isp/</link> <guid isPermaLink="true">https://bcdonadio.com/2016/freeing-myself-from-my-isp/</guid> <category>networking</category> </item> <item> <title>Easing the use of Xen&apos;s xe with xeh</title> <description>&lt;p&gt;At Propus, the company I work for, we’ve been using XenServer to virtualize most of our servers. This is great and practical, except for one little detail: the CLI administration tool, called xe, is a huge PITA to use on a daily basis.&lt;!--more--&gt; There’s also a GUI administration tool, called XenCenter, but it’s Windows-only. Well, there &lt;em&gt;are&lt;/em&gt; some GUIs for GNU/Linux, but those are either abandoned or completely incapable of doing anything useful.&lt;/p&gt; &lt;p&gt;So, I decided to write a simple script to ease some of the repetitive, mundane actions of administering Xen. 
It’s called xeh (as in XE Helper), and it’s publicly available on GitHub under GPLv2.&lt;/p&gt; &lt;p&gt;&lt;a href=&quot;https://github.com/bcdonadio/xeh&quot;&gt;https://github.com/bcdonadio/xeh&lt;/a&gt;&lt;/p&gt; &lt;p&gt;As it was written in BASH in a single day (TODO: learn Python), there’s a lot to improve. I plan to update it as use within the company grows and new features are requested or bugs are found. Eventually, I might rewrite it in another (safer, faster, better, real) language, but for now it fulfills my needs.&lt;/p&gt; &lt;p&gt;The main features are the following:&lt;/p&gt; &lt;ul&gt; &lt;li&gt;Remote administration&lt;/li&gt; &lt;li&gt;Automatic SSH tunnel and VNC session creation&lt;/li&gt; &lt;li&gt;Simplified object search interface&lt;/li&gt; &lt;li&gt;Ability to use a predefined list of servers and credentials&lt;/li&gt; &lt;li&gt;Good user input validation&lt;/li&gt; &lt;li&gt;Human readable memory value input&lt;/li&gt; &lt;li&gt;Consistent user interface&lt;/li&gt; &lt;li&gt;Just watch out for the occasional loose end!&lt;/li&gt; &lt;/ul&gt; </description> <pubDate>Wed, 04 Sep 2013 13:51:00 -0300</pubDate> <link>https://bcdonadio.com/2013/easing-xen-with-xeh/</link> <guid isPermaLink="true">https://bcdonadio.com/2013/easing-xen-with-xeh/</guid> <category>development</category> </item> <item> <title>Getting an IPv6 connection with SixXS</title> <description>&lt;p&gt;Currently, no major ISP (at least consumer-grade) provides IPv6 traffic or addressing on their networks in the country where I live, but I’m curious, and couldn’t be left out of one of the biggest changes to the Internet since the first browser.&lt;!--more--&gt; So I went looking for tunnel options and found that SixXS provides free tunnels to anyone who wants one and has a reasonable motive. 
Lucky me, plain curiosity is an acceptable one.&lt;/p&gt; &lt;p&gt;After a little research, I found out that there is a PoP in my country, and it provides fairly low latency for a country of continental dimensions like mine. This provider is CBTC (brudi01), with a ping time of ~45ms from Porto Alegre. So, everything was perfect, except for one little detail: how the hell does IPv6 work?&lt;/p&gt; &lt;p&gt;The first thing to notice is that IPv6 is completely independent of IPv4, so you can (and must) keep a completely different set of firewall rules for each. You thought keeping a single set of consistent rules was hard, don’t ya? Try keeping two, for every single computer. You should use ip6tables to handle the IPv6 rules, and try very hard not to mix everything up. Also, you can’t simply copy the rules from one to the other: although at a quick glance they have almost the same options, they’re not perfectly compatible, mainly regarding ICMP packets, which are almost completely different. They even baptized the new protocol with a new (but not very creative) name: ICMPv6.&lt;/p&gt; &lt;p&gt;After being accepted as a SixXS user, I could choose my tunnel type, and the options were the following:&lt;/p&gt; &lt;ol&gt; &lt;li&gt;Static: really not an option for me. I don’t have a static IPv4 address, and SixXS draws ISKs (a system of credits, to punish bad users) from users who do not keep their static connections up 24x7. As I have a dynamic home connection with a very noisy desktop, this was simply not an option.&lt;/li&gt; &lt;li&gt;Heartbeat: I chose this one, as I have complete control over the NAT of my network. This modality doesn’t draw credits if a tunnel stays down, and even adds credits if you open a connection every day.&lt;/li&gt; &lt;li&gt;AYIYA: the IPv6 MacGyver, able to traverse almost every kind of NAT, but it imposes overhead on both the client and the server. 
For my desktop this overhead would be negligible, but since I can use Heartbeat (I have full control of my NAT), it would be of little use to me.&lt;/li&gt; &lt;/ol&gt; &lt;p&gt;For some reason, I needed to wait again for approval of my tunnel request. A couple of hours later I was able to set up my tunnel. Surprisingly, installing AICCU - the daemon that handles all three tunnel types - through apt-get on Ubuntu was so easy that it almost took all the fun out of setting up the tunnel, except for a couple of details. Just type this and follow the wizard:&lt;/p&gt; &lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# apt-get install aiccu&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt; &lt;p&gt;Done with the installation, you need to understand that you received &lt;em&gt;two&lt;/em&gt; /64 subnets, even without asking for an additional /48. The first one holds the tunnel endpoint IPs: ::1 is the PoP router and ::2 is you. Actually, you can use this ::2 for any kind of traffic because, like everything in IPv6, it’s globally reachable. But you will not be able to use any of the other IPs in this subnet. They’re assigned to your PoP and will not reach you.&lt;/p&gt; &lt;p&gt;The second subnet is routed directly to you. Any packet carrying your subnet prefix (disregarding the 64 least significant bits of the address) &lt;em&gt;will&lt;/em&gt; reach you. In fact, if you wireshark the connection while pinging any address of the subnet, you will see them. Cool, huh? Well, not yet. &lt;strong&gt;Those pings will not be answered&lt;/strong&gt;, because your device is not configured to answer for any of the addresses of the second subnet. 
For your device to be able to do so, you must add them to the list of IPs it answers for, as follows:&lt;/p&gt; &lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# ip addr change &amp;lt;ipv6_complete_address&amp;gt;/64 dev sixxs preferred_lft 0&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt; &lt;p&gt;Repeat the process until every address but the one you want as default has its preferred lifetime set to zero. If you need to list the added addresses and find out the current default, do this:&lt;/p&gt; &lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# ip addr&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt; &lt;p&gt;The addresses marked with the &lt;em&gt;deprecated&lt;/em&gt; tag are the ones with preferred_lft set to 0. The closer to the top an address appears, the later it was added. Remember that the deprecated ones continue to work normally; they just won’t be the default source for new outgoing connections.&lt;/p&gt; &lt;p&gt;Good, but now you have to make those changes permanent; otherwise, whenever the machine is rebooted or the sixxs interface is brought down and up again, you will lose the address configuration made so far. 
You can do so by putting those configurations into a new executable BASH script, e.g. /usr/local/etc/aiccu-subnets.sh or any other path, like this:&lt;/p&gt; &lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;#!/bin/bash&lt;/span&gt; ip addr add &amp;lt;your_public_subnet_prefix&amp;gt;::1/64 dev sixxs ip addr change &amp;lt;ipv6_complete_address&amp;gt;/64 dev sixxs preferred_lft 0&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt; &lt;p&gt;Then you must tell AICCU to execute this file whenever the tunnel is established, by appending the following line to /etc/aiccu.conf:&lt;/p&gt; &lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;setupscript /usr/local/etc/aiccu-subnets.sh&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt; &lt;p&gt;Finally, the glibc default DNS resolver (getaddrinfo(), also called gai) prefers to answer with the IPv6 address of a given domain whenever one is available. If your tunnel has fairly high latency, you may not want this. With the following configuration, the tunnel will only be used when the resolver finds that the given hostname has only an AAAA record, as it will always prefer to answer with the A record.&lt;/p&gt; &lt;p&gt;To do so, add or uncomment the following line in /etc/gai.conf:&lt;/p&gt; &lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;precedence ::ffff:0:0/96 100&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt; &lt;p&gt;Now you’re ready to rock out with your new IPv6 connection. Don’t forget that &lt;strong&gt;you must configure a firewall for the IPv6 stack&lt;/strong&gt; with ip6tables. 
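A minimal starting point could look like this (a sketch only; adapt it to your own policy before relying on it):

```shell
# Drop everything inbound by default, then allow loopback,
# reply traffic, and ICMPv6 (IPv6 breaks without it).
ip6tables -P INPUT DROP
ip6tables -A INPUT -i lo -j ACCEPT
ip6tables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
ip6tables -A INPUT -p icmpv6 -j ACCEPT
```
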
You’re now globally reachable and there’s no NAT to protect you!&lt;/p&gt; &lt;p&gt;See ya!&lt;/p&gt; </description> <pubDate>Tue, 23 Jul 2013 02:14:00 -0300</pubDate> <link>https://bcdonadio.com/2013/getting-ipv6-sixxs/</link> <guid isPermaLink="true">https://bcdonadio.com/2013/getting-ipv6-sixxs/</guid> <category>networking</category> </item> </channel> </rss>