When I wrote my 2019 open source contributions annual review I had high hopes for my open source contributions in 2020. As I wrote in my 2020 health annual review I allowed the political upheaval in my home country, the US, to distract me way too much. Sure, there was some COVID distraction in March/April, but if anything I was hoping the lack of travel would give me time to focus more on code generation. It was not to be. That excuse aside, I still managed to put 698 hours into open source projects. That’s a slight uptick from 2019’s 653 hours but short of the 1000 hours I was hoping to contribute. The distribution looks very different as well, with most of it concentrated around my work with the B612 Foundation. The five projects I contributed to the most fall into a relatively broad range of software (from highest to lowest number of hours contributed):
I am learning how to work with Kotlin Multiplatform in a real world environment, which includes making websites with Kotlin/JS. I am all thumbs when it comes to CSS and have never done much with React. A good way to really plow through that is to take concepts I want to replicate and port them to Kotlin/JS (if possible). One UI feature I’m exploring is formatted lists. We use them everywhere nowadays. Searching around I found this timeline example created by Florin Pop. It’s not a huge amount of code but it makes a pretty neat looking timeline view of a collection:
Questions I want to answer are:
- Can I reproduce this using Kotlin/JS?
- Is the source code clearer or more obtuse in Kotlin/JS?
- What does the generated Kotlin code look like?
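To give a flavor of what such a port might look like, here is a minimal sketch using the multiplatform kotlinx.html DSL (the data class, CSS class names, and structure here are my own stand-ins, not Florin Pop's original markup):

```kotlin
import kotlinx.html.*
import kotlinx.html.stream.createHTML

// Hypothetical model for one entry on the timeline.
data class TimelineEntry(val year: Int, val text: String)

// Render the list as alternating left/right timeline cards,
// the general shape the CSS timeline examples use.
fun renderTimeline(entries: List<TimelineEntry>): String =
    createHTML().div("timeline") {
        entries.forEachIndexed { index, entry ->
            div(if (index % 2 == 0) "container left" else "container right") {
                div("content") {
                    h2 { +entry.year.toString() }
                    p { +entry.text }
                }
            }
        }
    }
```

The original's CSS would still need to be ported or linked separately; this only covers the markup generation side of the question.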
I’ve been on a benchmark tear this past month, driven by my excitement about the news around x86 alternatives. There are the ARM chips by Fujitsu running some of the fastest new supercomputers. There is the M1 chip by Apple. Now we have a potential new RISC-V chip by a company called Micro Magic which looks to be finally bringing performance into a range comparable to desktop-ish ARM chips. This article by Ars Technica really whetted my appetite. I wanted to see how this chip’s real world performance, assuming we take the benchmarks at face value, compares to the CPU in the PineBook Pro (PBP).
This isn’t a clickbait headline, and I won’t have an answer either way by the end of this post. There was an article in my RSS feed yesterday pointing out that the Linux Foundation wasn’t “dogfooding” FOSS or Linux with their annual report. Buried in the metadata of the PDF they circulated was the not too surprising fact that the brochure was created with Adobe Creative Suite on macOS Catalina 10.15. The blogger considers it quite the indictment. I’m not so sure, but I would like to explore whether Linux can be used by desktop publishing (DTP) professionals. I’ll caveat this exploration by saying it was the early 1990s when I was last at all seriously involved in DTP, so some of my conventional wisdom may be dated. With that stated, let’s explore the potential of professional DTP on Linux.
For the last benchmark I am going to explore the performance of Java using a suite assembled for the purpose of benchmarking Java-based compute systems, called Renaissance. There has been a change since I started this (see this previous post) though. Azul, a company that specializes in Java and JVM infrastructure, has released a version of OpenJDK that is compiled for Apple Silicon. I have therefore run the benchmarks both using the AdoptOpenJDK Intel installation running under Rosetta and the Apple Silicon native M1 one by Azul. Let’s see how Renaissance runs in these three environments. The full project and results are documented here.
The last benchmark of the .NET Platform that I have is the benchmarking suite that the .NET team put out here. It is literally thousands of tests covering all parts of the CLR. Nothing could be more thorough. As I wrote in this previous post I’m doing a series of benchmarks of .NET and JVM on Apple Silicon. While there are impressive native benchmarks, it will be some time before the .NET runtime has native support, so I have to factor in the potential hit and problems with Rosetta. How much of a performance hit is there, and will it be enough that applications targeting it will have problems? All code and results are published here.
For the second benchmark I am going to explore .NET compilation and benchmark performance using Avalonia’s code base. As I wrote in this previous post I’m doing a series of benchmarks of .NET and JVM on Apple Silicon. While there are impressive native benchmarks, it will be some time before the .NET runtime has native support, so I have to factor in the potential hit and problems with Rosetta. How much of a performance hit is there, and will it be enough that applications targeting it will have problems? All code and results are published here.
For the second benchmark I am going to explore the performance of Java with a library I use on a regular basis for astrodynamics calculations: Orekit. There has been a change since I started this (see this previous post) though. Azul, a company that specializes in Java and JVM infrastructure, has released a version of OpenJDK that is compiled for Apple Silicon. I have therefore run the benchmarks both using the AdoptOpenJDK Intel installation running under Rosetta and the Apple Silicon native M1 one by Azul. Let’s see how Orekit runs in these three environments. The full project and results are documented here.
For the second benchmark I am going to explore the performance of .NET rendering using an Uno Platform benchmark. As I wrote in this previous post I’m doing a series of benchmarks of .NET and JVM on Apple Silicon. While there are impressive native benchmarks, it will be some time before these two runtimes natively support it, so I have to factor in the potential hit and problems with Rosetta. How much of a performance hit is there, and will it be enough that applications targeting it will have problems? All code and results are published here.
As I wrote in this previous post I’m doing a series of benchmarks of .NET and JVM on Apple Silicon. While there are impressive native benchmarks, it will be some time before these two runtimes natively support it, so we have to factor in the potential hit and problems with Rosetta. How much of a performance hit is there, and will it be enough that applications targeting it will have problems? All code and results are published here. For the first benchmark we are going to explore the performance of JavaFX.
Apple Silicon is looking pretty impressive. I’m impressed enough to replace my 2018 MacBook Pro with the shitty keyboard with a new M1 MBP. All the benchmarks though are useless to me since I’m primarily a .NET and JVM developer who will be running under emulation in Rosetta for the foreseeable future. I intend to quantify the performance of the new Macs versus the old Intel ones with a suite of benchmarks specifically targeting .NET and JVM runtimes.
I worked out how to do basic string file input and output for Kotlin Native using their standard POSIX libraries. The code for these methods is at the bottom. This article explores it in more detail if you are interested.
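The helpers boil down to thin wrappers over the C standard I/O calls that Kotlin/Native exposes through `platform.posix`. A minimal sketch of what such read/write functions can look like (the function names are my own, not necessarily those in the article):

```kotlin
import kotlinx.cinterop.*
import platform.posix.*

// Write a whole string to a file, truncating any existing contents.
fun writeAllText(path: String, text: String) {
    val file = fopen(path, "w") ?: error("Cannot open $path for writing")
    try {
        fputs(text, file)
    } finally {
        fclose(file)
    }
}

// Read a whole file back into a single Kotlin String.
fun readAllText(path: String): String {
    val file = fopen(path, "r") ?: error("Cannot open $path for reading")
    try {
        return buildString {
            memScoped {
                val bufferLength = 64 * 1024
                val buffer = allocArray<ByteVar>(bufferLength)
                // fgets returns null at EOF; toKString converts the C string.
                while (true) {
                    val line = fgets(buffer, bufferLength, file)?.toKString() ?: break
                    append(line)
                }
            }
        }
    } finally {
        fclose(file)
    }
}
```

Since `fgets` treats the data as C strings, this is for text files; binary data with embedded NUL bytes would need `fread` instead.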
I’ve always liked having a record of what I’ve done on a project and a place for notes. That’s often been a notebook, updates to GitHub/GitLab/JIRA issues/tickets, or maybe blog entries. Those all have problems. In reading Masters of Doom I came across a passage describing the intense environment around the development of Quake. John Carmack came up with a concise running log of what he was doing, called a “.plan” file. It provided a frictionless way for him to keep track of his progress, the things he wanted to fix later, notes to himself, etc. He used it for himself but also posted it to the internet to keep the gamer community informed. You can read the whole archive of them from 1996 through 2010 here, although after 1998 they were more like a blog. I decided to tweak the style of his 1995-1998 system slightly and have been using this modified process for tracking my development on projects since November of last year. I call these files DevLog (very creative, I know) and find it works so well that I thought I’d share my methodology here.
Yesterday I was following a Twitter thread by John Carmack where he was talking about optimizations. Someone suggested he do some sort of series on how to build game code, et cetera. His response was to point to something I’d never heard of before: Handmade Hero. This is a project started by a small group of developers back in 2014. Their goal is to produce a whole game as live coding so people can see how they, professional developers, build a game. I’ve watched live coding videos before and enjoyed them. My first was this person writing a vim-like program for CP/M in assembly language, link here. Sounds very dry, but I actually get a kick out of seeing how other coders work. I’m just through the first video and am pretty fascinated.
I’ve spent the last few days somewhat diligently playing around with Rust. That’s mostly been studiously reading The Rust Language Book and doing some of the examples. I’m quickly tiring of that and will have to move on to koans, tutorials, or just some projects. However, each day I’m learning a bit more about Rust. There is a little more insight each day, mostly positive, but one area where I have some concerns is error handling. Specifically, I’m concerned about the lack of any traditional exception handling; in its place there is only returning error objects or panicking (crashing) the whole program.
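To make the contrast concrete, here is a minimal sketch of the two styles (the function and its use are my own illustration, not from the book):

```rust
use std::num::ParseIntError;

// Returning an error object: the signature forces the caller to deal
// with failure, roughly where a checked exception would appear in Java.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    // Handling the error explicitly, where a try/catch would otherwise go:
    match parse_port("8080") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => eprintln!("bad port: {e}"),
    }

    // Or panicking (crashing the program) when we declare the error
    // unrecoverable:
    let port: u16 = parse_port("8080").expect("port must be a number");
    println!("port is {port}");
}
```

Inside a function that itself returns a `Result`, the `?` operator propagates the error upward, which covers a lot of what exceptions are used for; unlike exceptions, though, it has to be threaded through every signature on the way up.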
As I tweeted about over the weekend, I’m going to be doing a deep dive into Rust for the month of May. I’m not trying to be a complete convert. I have work to do after all. However, I really want to explore this lower level language as opposed to my day to day work in managed and interpreted languages. I’m going to try to spend an hour or so each day working through tutorials and maybe trying to build my first real application with it. I’m starting with the Rust website, which seems to have some great resources like a language guide book and tutorials, and I’ll go on from there. Today, however, was just getting the system set up and working through my first hello world tutorials. So far it’s been a mostly positive experience.
I’ve spent today dusting off my old Diaspora API-driven blog comment system code. The details of that implementation can be found in this blog post from late 2018. Now that the API is running on a production server thanks to Diaspora-Fr, I have revived the code running on my server and pointed it at their Diaspora server. I never coded up a full handshake for the initial authentication steps so that is all manual unfortunately, but I believe it is now up and running. The way I coded the server it can only point to one host at a time, but since this is a proof of concept right now that’s legit. For the time being I’ll be linking against that Diaspora server for comments on threads. You can comment from any Diaspora server though, not just that one. If you don’t have a Diaspora login then you can simply read the comments.
I’m pretty stoked about what I was able to do in 2019 towards open source software. I’ve always contributed here and there, but I took the momentum of contributions from the second half of 2018, in that case to the Diaspora project, and just kept on trucking. I spent a total of 653 hours on open source projects in 2019. A lot of that was new code generation, but there is of course more to development than just writing code: there were lots of meetings, some hackathons, documentation generation, tech support, etc. too. Some of these were projects I started; others were contributions to established projects. The five projects I contributed to the most fall into a relatively broad range of software (from highest to lowest number of hours contributed):
Necessity is the mother of invention. I’m working on a project where it seems that storing and manipulating documents is the way to go instead of the relational database route. Maybe it’s too much time having worked with Mongo, but it just feels natural to me. The go-to embedded database is of course SQLite, so I started with that and some of the new document processing capabilities it has. Then it occurred to me to ask if there is a NoSQL document database equivalent to it. Sure enough, LiteDB is one, and it is built natively for .NET. After using it a bit it was clear I needed to inspect and manipulate the data stored in it not just in my app but on the side. While the website shows half a dozen ways to do that, literally all of them run only on Windows. After a few days of suffering through a Windows VM with that being the only reason for it, I decided to take some of my newfound skills with Avalonia and build a client that can run on Linux, Mac, and Windows too. This begat LiteDB Portal.
I’m working on prototyping some new desktop and mobile applications. One of the things I want them to be able to do is the “infinite scroll” workflow you see in social media timelines like Twitter, Diaspora, etc. Essentially, when you almost get to the bottom of your timeline it automatically loads more information. As usual my go-to framework for the desktop is Avalonia. I’m using a basic `ListBox`, so my first thought was to simply look for scroll events and scroll percentages (or some metric like that). It turns out that’s not directly and easily exposed. The solution was to manually wire up similar event handlers using related properties that are exposed in more raw terms. Below is a breakdown of how I did it. You can find the solution in this Gitlab Repository.
(I want to thank the Egram team for writing their Avalonia-based Telegram client with an open source license and publishing it here. The way they handled more complex scrolling behavior interception led me to this solution. Thanks also to MakcStudio for cluing me in to the existence of that project and its source code.)
**Note:** this is a second version with a cleaner implementation of capturing the events using
I am embarking on making some libraries that have a chance to get large and that I want to be independent from the applications that will utilize them. It’s really all about easy and scalable dependency management. In Java we usually do this with Maven or Gradle. Under .NET we have NuGet. Of course there are command differences across the three, but the idea is the same: just say which packages your code depends on and the tool will do the rest for you. They have another neat feature where, if you write your own library, you can easily bring that into your other projects’ dependencies too. For public projects you want to circulate, you can push them to the same repositories you download the other dozens (or hundreds) of libraries you use from. For local development you can use them too. I thought it’d be as simple in NuGet as `mvn install` or `gradlew install`. Would that it were so simple. The long story short is that if I were developing on Windows it’d be slightly more cumbersome but not that difficult. For Linux, and I believe for Mac, however, there is a lot more setup that needs to happen. Worse, the documentation for doing local repositories is a bit hidden and there are things you have to do in a few places. I’ve decided to document them all here.
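As a preview of the sort of thing involved, a local NuGet feed can be as simple as a directory registered as a package source. A sketch using the current `dotnet` CLI (the project names are made up for illustration):

```shell
# Pack the library into a .nupkg (assumes MyLib/MyLib.csproj exists).
dotnet pack MyLib/MyLib.csproj -c Release -o ./local-nuget

# Register that directory as a NuGet source named "local".
dotnet nuget add source "$(pwd)/local-nuget" --name local

# Consume it from another project just like a published package.
dotnet add MyApp/MyApp.csproj package MyLib
```

The wrinkles described below mostly come in around where the global NuGet config lives per platform and cache invalidation when re-publishing the same version number.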
Avalonia is on the verge of releasing their 0.9 update (it’s up to its fourth preview). .NET Core 3.0 was released last month. I’ve been working on non-Avalonia related projects for the past several months, but when last I left all my tutorials and projects they were running on Avalonia 0.8 and .NET Core 2.1. I’m looking forward to the near term release, and I have some upcoming projects that I think could be suited for Avalonia. In preparation I went and updated all of my tutorial code repositories in a side branch waiting for the day 0.9 hits prime time. Through that process I learned about some of the small and not so small (but all good, I think) changes people may encounter migrating versions.
(As a side note, this is why it is a good idea to always be forward-migrating code as libraries and runtimes jump versions. It’s easier to tweak from one release to another rather than across two, three, or more.)
For a very long time I’ve been considering contributing to open source financially not just with code contributions. The model I was originally working on was one where I’d “buy” the free-as-in-beer open source software that I was using regularly. Looking at my software stack it’s almost all projects which don’t necessarily have huge corporate backing. Yes I use GitHub but that’s essentially Microsoft at this point. Yes I use Java but that foundation has huge corporate sponsors. Yes I use Linux which has lots of sponsors for some pieces but the projects I use are the smaller off to the side ones. So how did my “buy the software” model work? It turns out that plan sucks.
Between the months of July and August I had a month of travel. Part of that was a three week trip to southern Europe and another week at my first astrodynamics conference in a very long time (the AIAA/AAS Astrodynamics Specialists Conference). Because of that I actually forgot to post some of the things that I had been up to for the first half of July, before my trip. I’m therefore combining my open source contributions update to cover both months in one post. One of the biggest shifts you’ll see is my focus for the time being. While I spent much of May and June focusing on Avalonia or development using it, my focus in the past couple months, and for the time being, has shifted to the B612 Foundation, a non-profit organization looking to make strides in improving our ability to track near-Earth asteroids and give us enough warning time to mitigate a potential impact if one is predicted to occur. The work I’m doing on their open source astrodynamics engine and related tools is the perfect merger of my interests and technical capabilities: software engineering and aerospace engineering.
Back in June, on a lark, or maybe it was some nostalgia kick induced by an article I read somewhere, I wanted to see if SETI@Home still existed. This was a system designed by UC Berkeley around 2000 to turn spare CPU cycles on otherwise idle computers into a massive distributed computing infrastructure. It turns out that not only does it still exist, but in the nearly 20 years I hadn’t been paying attention it turned into a giant ecosystem of so-called “Volunteer Computing” (VC) called the Berkeley Open Infrastructure for Network Computing (BOINC).
It’s been a while since I’ve done development around Diaspora regularly, or anything associated with the API. When I saw the announcement of the work being done with the API at their hackathon back in April I was pretty stoked. It looks like it will be on track for the 0.8.0.0 release, which hopefully will be in the near future. I was especially excited to see that there is a possibility of someone putting it up on a live test server to work with. To get ready for that I wanted to make sure that my two code bases, the “test harness” and the “Comment Reflector” that used it to create comments for a blog as described in this blog post, would work as soon as a server went live with the code.
I follow a lot of open source developer blogs, including some from project-based blog aggregators like Debian, Ubuntu, and some .NET developers. One of the things they do that I like is provide a monthly summary of their open source contributions. In some cases I’m pretty sure it’s part of accountability for getting funding to work on the project. In other cases I think it’s just a little historical tracking on their part. Some people make lots of changes and others just have one small package they incremented slightly. As I (hopefully) continue to ramp up my open source software contributions I want to begin that process as well. Some of these are relatively inconsequential things. Many of them are documentation-driven. All of them, however, I hope will help open source projects or users of open source projects in at least some infinitesimal way. Worst case, I’m just publicly documenting my own experiences to make for easy reference for myself as well as potentially others. This will be the first such summary article.
As I get ready to do a tutorial on live simulator rendering using nothing but Avalonia, it looked like using the standard event handlers in .NET and ReactiveUI that are the underpinning of Avalonia data flow would be the easiest way. There is a great example of this too with Nikita Tsukanov’s Avalonia BattleCity game demo, which I was going to crib off. The first thing that came to mind though was: what exactly is the throughput and overhead of .NET’s event handling system? Coming from Microsoft, and being at the core of many patterns, not just in the UI (though the UI is where we see it most prominently), I was curious if it would break with a large number of objects or handlers. I didn’t expect it to be a slouch, but exactly how far can it be pushed? This has most certainly been done by others, and probably more thoroughly, but it was new to me and I found it instructive. This is a write up of my exploration and the results.
Tree views are a standard control for looking at hierarchical data in a user interface. Avalonia has a `TreeView` control as well. In this tutorial we are going to go over creating a `TreeView` control for a league roster system. This is based on a similar tutorial written for WPF by Mike Hillberg, which you can find here, except that we are skipping the manual list aspects of it. As an FYI, I’m doing my work in JetBrains’ Rider IDE so screenshots and instructions will be from there, but this obviously works with any editor. You can find the final solution for this blog post in this Gitlab Repository.
A common way of displaying and manipulating tabular data is some sort of spreadsheet-like area, often called a data grid. In Avalonia that control is literally `DataGrid`. Starting with a standard project it’s relatively easy to get going; however, there are a couple of important steps that can be easy to overlook. We are going to go through starting from the default template all the way through to having an application that allows one to edit a data grid. You can find the final solution for this blog post in this Gitlab Repository.
There are many ways to lay out UI elements in Avalonia, from the `StackPanel`, which stacks things on top of each other or alongside each other, to the `Canvas`, which allows one to specify the exact pixel position of any element. The `Canvas` may sound like the way one would want to go because it gives the most control, but in actuality it’s overly prescriptive. If you notice carefully how most user interfaces work, they dynamically resize as windows are resized. They try to maintain a sense of proportionality with the drawing area being given to their window. The rigidness of the pixel-based system is sometimes useful, but more often than not the panel type you’ll want to use in Avalonia is the `Grid`. Here is Part 1 of a multi-part series on the features of the grid and how to use it. You can find the final solution for this blog post in this Gitlab Repository.
In Avalonia Buttons Multiple Ways I went through a tour of the various ways one can bind buttons to commands and events. I wanted to explore the topic just a little deeper and look at some of the more advanced scenarios of dealing with binding behavior, specifically passing parameters to commands and setting a button’s enabled status based on other data in the View and View Model. Let’s build a little app to show how to do this. You can find the final solution for this blog post in this Gitlab Repository.
Buttons are a pretty fundamental UI element that we all must leverage. How does one go about interacting with and getting feedback from button clicks? It turns out there are multiple ways, and there are no wrong answers, but there may be some better answers than others. I’m going to go through the various ways one can intercept button events, their feedback, and the pluses and minuses of each. You can find the final solution for this blog post in this Gitlab Repository. We are going to start with simple event actions that update a field in our view model. We will then look at how to wire in an asynchronous call to a dialog box to show handling feedback within the event handler. This will illustrate how buttons that invoke behavior beyond directly changing a View Model field can be worked with. The standard tutorial covers that case. The ways we will be attempting to wire up these buttons are:
- Responding to the Command via a simple Method on the View Model
- Responding to the Command via a Reactive Command response on the View Model
- Responding to a click event via XAML
- Responding to a click event via code
- Responding to other events
- Adding Asynchronous Handlers
Finally! I’ve been trying to get Jekyll code highlighting working since I first ported my blog. Earlier documentation says you have to install stuff which later ones said is wrong. “It just works” they say. Yet when I tried to get it working it always came up with nada! The documentation left out one really important part…the style sheets. It’s sad how long I went back and forth with this to have it simply be the equivalent of “is it plugged in”!
This week has been a crazy deep dive on Avalonia. I’ve managed to spend over 30 hours on it and related projects. Like with any new thing you learn, things that were initially confusing have become more second nature. I think part of that is that I more than doubled the amount of time working with it in a pretty focused period. There were certain points where things were just coming together almost automatically in my mind. At one point everything was just humming, but yesterday I hit a good sized roadblock trying to learn some more complex behaviors. It’s a common problem when learning anything, even running. You hit this point when all of a sudden everything is moving with complete efficiency and it starts feeling effortless. It can be euphoric; however, like anything euphoric, it wears off and reality will set in eventually. Yesterday it was getting my face cracked trying to get a TreeView control working and getting notifications of event changes within DataGrids properly percolating around. To be fair, this is getting into using it in more complicated ways, which means that the simple responses in the developer assistance area are going to get harder and harder. Sometimes the answer is going to be “it depends” or “it can’t do that” or even “I think it should work like this but I’m not sure”. I’m still going through the gauntlet on it, but I’m stoked by the entire development experience, have a few applications I want to write in it, and am looking forward to contributing to the project.
When doing visual design of forms (or in this case views) it’s often best to see what they look like with real data. Just as with mocking data in automated tests, it’s possible to use a built-in XAML feature to achieve the same thing with graphical editors. It’s a XAML field called `Design.DataContext`. We’ll explore that here through the standard default auto-generated MVVM application, which you can follow along how to create here. To see the final solution, check out the Avalonia Example Repository and look at the `DesignDataExample`, or browse directly here.
Getting started with a new library, framework, or development system requires two important things: documentation and code samples. With both of those a developer has enough to start fiddling with things and get bootstrapped. Code samples without documentation can be like searching a dark room without a light, but it is doable. Documentation without samples can be far more difficult than that. In fact, for more complicated APIs it can be impossible without source code, which I guess is sort of a code sample in and of itself. For a lot of projects the code samples I go to are the unit/automated tests. What better place to see how the library is used? However, having actual code samples, especially for more complicated libraries, is even better. In the case of Avalonia, the project has shipped a “controls catalog” as part of their mainline source for a while now. But now you don’t have to pull down and build the full Avalonia source code to get it, since there is a standalone version of it. Here’s how to use it with Rider and AvalonStudio.
In my write up on Avalonia first impressions, one of the things I most missed was a full IntelliSense/auto-complete style system for the Avalonia XAML and the ability to create components in the IDE on Linux. “The IDE” in both of those cases was the one I had focused on, JetBrains’ Rider IDE. When I originally wrote the article I thought there was no alternative, so I’d have to use a workflow akin to doing it in a pure text editor. However, as I was in the middle of editing I discovered that the Avalonia developers are actually working on a full IDE built with Avalonia called AvalonStudio. While it is in beta, under heavy development, and missing some key features because of it, I have to say it is impressive how many features it has and how well it works already. Could it replace Rider for .NET development right now? No. Could it eventually? Maybe, but that’s not why it is interesting to me. It’s interesting to me because it has the facilities I sorely wanted in Rider but found missing: XAML IntelliSense (and preview!) and Avalonia component creation with proper namespace behaviors. So how does one go about running it?
Update (26 Mar 2019): The repository does in fact have binary installers in the releases not just source code. The Debian installer installed correctly but made a bad menu shortcut. Otherwise the net effect was that it worked as well as the method below but with less trouble. The releases are here. The binaries are under the “Assets” drop down for each release. To run it you need to edit/create a shortcut to the below command or execute it on the command line:
I’ve documented the first forays into doing cross-platform .NET development with Avalonia. I’ve stated that I’m overall impressed. However what are my deeper thoughts on it?
As I wrote in my Avalonia Hello World (On Linux) article I’ve made more progress than just executing the canned auto-generated Hello World. I’ve actually been through their one official tutorial and then some. You can find it on their website here. It will walk you through the steps of making a simple proof of concept “To Do List” application which shows you all of the steps of creating a simple application, adding controls, creating reactive controls, and how the Avalonia System works. It has two paths. One for those using Visual Studio on Windows and another for those using the .NET Core command line tools. Since I’m sticking with the whole doing everything on Linux thing I’m using the latter.
As I pick up doing cross-platform desktop application development using AvaloniaUI I need to go through the obligatory hurdles of the “Hello World” program and following tutorials. I figure why not document them here for others too. Fortunately they actually provide some pretty solid getting started and tutorial guidelines, so this should be considered more my personal notebook of those.
In my 2018 attempts at ditching the walled gardens I made a bunch of progress, which I’ve since backslid from, on replacing Google services with Kolab Lab’s offerings. I want to have the benefits of GMail, Google Drive, etc. but I would rather not have Google owning all of my data. At the same time I’m not going to fall on my sword and go back to mid-1990s infrastructure a la Richard Stallman either. Self hosting these things is a daunting task which I considered to be way out of the reach of this software developer. I thought that anyway until I ran across the FOSDEM session on the YunoHost system, which makes self-hosting a much more out of the box experience. Could this be the solution to my problem?
After several months of dormancy in my software development activities I’ve started hitting a solid pace of getting back into the swing of things recently. As much as I wanted the next big thing for me to work on to be something Fediverse related, specifically Friendica, that has created a huge mental block for me. I wrote about that a lot in this post. I’m not a language snob, more on that below, but getting fired up about doing PHP work on that project isn’t happening. I still never got to the bottom of whether it was more PHP or the inertia of getting started on the project. It doesn’t matter either way because I wasn’t getting anything done. I wasn’t sure if maybe it was a general lull. I think I’ve answered that question in the negative. So what is this looking like then?
I took the deep dive into the Fediverse last year when I decided to bite off the Diaspora API development task with Frank Rousseau. It was a great experience and I had hoped to do a lot more Diaspora work. With a lot of the ActivityPub discussions and there being some really good questions about how that should work I had embarked on an experiment to see what a merged Fediverse Social Media experience would feel like. Friendica has tie-ins to Diaspora, ActivityPub, and many others. It was a great candidate for it. I am way behind on doing my write up but I have my notes. That’s for another post. This post is about a conundrum I’m facing with my open source/Fediverse contributions: I don’t know which project(s) I want to focus on any longer.
I remember the first time I had to integrate myself into a new community. It was right after college. I had started my first job which was in a new specialization of my industry. I had to come to grips with a life transition, learning how to work with a new team and new software, learning about the ins and outs of the industry around me and those interactions, et cetera. It is a very unsettling position to have orders of magnitude more things to learn than time to do it. No one expects someone to pick it all up instantly but in me there is a drive to “come up to speed” as fast as possible. When it comes to contributing to the Fediverse I am feeling the exact same thing right now.
“Dogfooding” software is one of the best ways to wring out any problems with a design or implementation. The Diaspora API was designed with a wide variety of uses in mind including something potentially as grand as being the replacement backend for a revamped website. With the actual API now “in the can” and waiting for the real PR review I decided to try to use the API for an actual purpose and start dogfooding it. I had several ideas but the first one I decided to latch on to was a blog discussion timeline feature.
We’ve finally done it! Frank and I were able to get the last of our internal reviews done and the API code is now in the “real” code review for integration into the main Diaspora development branch. That alone is an amazing thing but I have a second piece of big news related to the API as well. Today I was able to stand up a first version of a blog “Discussion Browser” that uses the API to pull all comments and other interactions for a blog post that is associated with a specific Diaspora post. I’m going to be doing a write up of that in more detail later, but as a first cut it worked pretty well and showed that the API design and the code itself are functioning properly.
Some people just can’t leave well enough alone, I swear! When last I left Spring Boot world everything was going great. The project bootstrapping was pretty straightforward. The documentation pretty much matched the actual behaviors. The actual behaviors were pretty well laid out. Today I tried to create a project from scratch. Between fighting Java version hell from the online generator, to fighting gradle dependency hell both there and in IntelliJ, to then wrestling with some new fucked up syntax for something as simple as reading in the configuration file, I have wasted two hours and gotten absolutely fucking nowhere!
I was so excited when I finally got a real pod interacting with the API that I knew I’d have to get it written down before I could get to sleep. However before dropping right into the interactions themselves I decided to take some time describing how a piece of software would be allowed to do anything with a server. In Part 1 I laid all of those details out to get across some very important points:
- We are using a standard (OpenID/OAuth2) protocol for doing this
- Users have to give explicit permissions to an application, including being told what it is and is not asking to do
- There are security measures once an application is granted permissions as well.
This article essentially details the very first communications and gives people a feel for what the Diaspora API specification looks like in practice not just in theory.
Okay I’m obviously over excited about the fact that something which I knew should work actually did work. However all the previous API usages were on servers on the local machine, not behind an HTTPS link, and not being shared with the rest of the Fediverse. This one breaks through that barrier. I have therefore decided to document it in excruciating detail. For the first pass all of these interactions were manual, using cURL and the Firefox RESTClient plugin. The next step, which will be coming up very shortly, will be creating the very first server to use this for a real purpose (I’ll document that as it happens). This document goes over the nitty gritty details of the whole authentication piece. The next article will go into the calls themselves. If you don’t care about the nuances of the authentication steps then just skim or skip this and go to Part 2. So without further ado, here we go…
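For readers who want the shape of those manual steps without wading through the full detail, here is a rough Ruby sketch of the two key requests in the authorization-code flow. Every value below — the endpoint path, client credentials, and scope names — is an illustrative placeholder, not the actual Diaspora routes or my test pod's configuration:

```ruby
require "uri"

# All of these values are hypothetical placeholders.
POD           = "https://pod.example.com"
CLIENT_ID     = "my_registered_app"
CLIENT_SECRET = "s3cret"
REDIRECT_URI  = "https://app.example.com/callback"

# Step 1: the URL the user's browser is sent to so they can grant
# (or deny) the requested permissions on their pod.
def authorization_url(scopes, state)
  params = {
    client_id:     CLIENT_ID,
    redirect_uri:  REDIRECT_URI,
    response_type: "code",           # authorization-code flow
    scope:         scopes.join(" "),
    state:         state             # anti-CSRF value, echoed back to the app
  }
  "#{POD}/openid_connect/authorization?#{URI.encode_www_form(params)}"
end

# Step 2: after approval the pod redirects back with a one-time code,
# which the app exchanges (server to server) for an access token.
def token_request_body(code)
  URI.encode_www_form(
    grant_type:    "authorization_code",
    code:          code,
    redirect_uri:  REDIRECT_URI,
    client_id:     CLIENT_ID,
    client_secret: CLIENT_SECRET
  )
end
```

Doing these two steps by hand — the first in Firefox, the second with cURL — is essentially what the rest of this post walks through.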
As we begin to wrap up the year we also are beginning to wrap up the API getting ready for the “real” pull request for the API code. We are down to one last code review of the final clean up pass before we have it looked at by the core team. I think the code is pretty solid but it will of course have problems that are discovered during the review and the testing. Ah the testing, real world testing that we really need to do. To get there we need to have a test server. Thankfully that’s all taken care of now and we’ve had the first data interactions with a pod.
As I get more and more fed up with Facebook while also getting more and more embedded into the Fediverse I’ve been considering the whole #deletefacebook campaign again. I turned off Facebook earlier but never deleted it. As the new year approaches the thought of shutting it down appealed to me but then I went a step further thinking I should just blow away all of my data as well. There are lots of posts with lots of data and lots of associations that I want to keep though. Thankfully Facebook provides a mechanism for extracting your data. Unfortunately if you assume all of your data is there you’ll probably be wrong.
I’ve been blogging on Wordpress since 2013. For a long time I had wanted to blog and tried LiveJournal and sites like that. It wasn’t until 2013, when I was deciding to embark on a personal fitness experiment, that I finally bit the bullet and created the N=1 blog. The original premise was exploring the whole area of Quantified Self and longevity for my own purposes. It was going to be kicked off by a grand experiment of living various different fitness lifestyles for periods of time to see if any made a dramatic difference, positive or negative. I never really got too far into that experiment. Then the blog became my ramblings on the topic. Over time I had less interest in that and more in software engineering. Rather than create a whole new blog I decided to just add new categories. As the boundaries of what I wanted to post became less clear it really just became my public journal on all topics interesting to me.
Today is a momentous day in the Diaspora API development saga. Today we have completed primary development of the API, the unit tests, and the external test harness. There are still two code reviews between that and the real code review for integration into the main development branch, but all of the major work is complete. What does that mean exactly?
Boy are we really coming down the home stretch now! All of the scopes are implemented in every API endpoint now, with their corresponding tests to confirm that the permissions are working correctly. The most difficult of those, I thought, was once again the Streams. After beating my head against a rock a lot yesterday I put the whole project down for the day and then picked it up today. After warming up on the other endpoints I started working my way through getting Streams working such that it could filter private data. After a bit of fumbling I finally got a relatively simple solution to the problem and got all the tests passing correctly.
It’s been almost a week since there’s been an update on the API. I’ve been busy with other things and travel so it didn’t get as much focus as I would have liked to have given it. However there has been some progress. Thanks to Frank’s help we’ve been able to get all of the side branches merged into the core API branch, so we are now coming down the home stretch on getting it ready for integration. The first order of business for that is getting the OpenID security stuff squared away. I’m still working on understanding that better, and the more I go back to it and work with it the better that looks. There is still the question of the "refresh token" workflow, but work has been done on it, so if anything it’s a small tweak or a documentation thing versus a from-scratch development thing. Even in the event that it was a from-scratch thing, with the code base I have and the examples I mentioned before it shouldn’t be a huge effort to get that working. Most of the security work is therefore integrating in the much more fine-grained security scopes which Senya has been working to hone.
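For context on what that "refresh token" workflow amounts to: it is the standard OAuth2 refresh grant, which trades a long-lived refresh token for a fresh short-lived access token without the user having to re-authenticate. A minimal sketch of building that request in Ruby, assuming hypothetical credentials (none of these names come from Diaspora's actual code):

```ruby
require "uri"

# Sketch only: the refresh grant swaps a long-lived refresh token for a
# new short-lived access token with no user interaction required.
def refresh_token_request_body(refresh_token, client_id, client_secret)
  URI.encode_www_form(
    grant_type:    "refresh_token",
    refresh_token: refresh_token,
    client_id:     client_id,
    client_secret: client_secret
  )
end
```

If the server side already issues refresh tokens, wiring this in on the client is the "small tweak" scenario; if not, the grant type has to be enabled server side first.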
While I post mostly on Fediverse platforms like Diaspora and Mastodon, and am focusing my development efforts there, the instant messaging accessibility of Facebook Messenger has been elusive. I tried Wickr and it’s okay but not the most user friendly. Its claim to fame is that the messages only go from the sender to the receiver without a server between, except for authentication. That makes the flow clunky, to say the least. Which is why leaving Facebook has been one of my least successful aspects. I wanted to explore other options and in the last week I pulled the trigger and actually did it.
With the documentation changes wrapped up, but holding off on PRs until things solidify a bit more from the code scrub process, it was time to move on to the OpenID deep dive and review. Up until now I’ve been working with an authorization workflow that required me to request a new token every 24 hours and for the user to authenticate it. I wasn’t sure how much of that was because of the flow I chose or intrinsic to how it was coded up. As I continued to go over the OpenID documentation and other articles on the process I just couldn’t get it working. It was then clear to me that what I needed was an example to help me.
Luckily Nov Matake created some example projects to go along with his OpenID gems, one for the OpenID Connect Provider (the server side) and one for the OpenID Relying Party (the app side). I figured with that everything would be good to go. After all this was the same code he had running up on Heroku, but I wanted to see the nitty gritty details and set it up on both sides since I was going to need to do that with Diaspora and the test harness, or any other API use case I may be interested in. As I quickly found out, these projects have never been updated. They still rely on old versions of Ruby and Rails. Instead of trying to downshift everything to these versions I decided to fork the projects and get them running under Ruby 2.4+ and Rails 5. Unfortunately that derailed my entire Diaspora development effort for the day. The upside is that the community will have modern versions of these projects to use. I intend to polish them up a little more and then issue a PR back to the original project. My versions however can be found on my GitHub profile with the Connect Provider here and the Relying Party here.
In the process of doing these upgrades I was able to learn a lot more about porting Ruby code up from older versions. I also got a much better understanding of some OpenID flows. I’m going to use that to continue to move forward on the review of the implementation in the API and looking at client side implementation details. Because of the complexity of that whole process I think that’s probably something developers can use a good amount of help with via blog posts and examples.
- Documentation updates are complete but waiting for PRs for after the code scrub
- Updated Ruby on Rails OpenID examples from Nov Matake to work under Rails 5
You can follow the status dashboard at this Google Sheet as well.
Yesterday I said the paging API was complete but needed to be reviewed. The more I talked over some elements with people in exchanges on Diaspora, the more I realized there were a couple of tweaks I needed to make. The first suggestion I implemented was to have paging on any endpoint that returns multiple elements. The second was to have a parameter for specifying the number of elements requested. I was pleased that supporting that feature was really just two lines of code to change. However while in there I decided to beef up the defensive programming techniques in some other places.
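A minimal sketch of the kind of defensive handling a per-page parameter needs — the constants, limits, and method name here are mine for illustration, not Diaspora's actual code, and `Integer(..., exception: false)` requires Ruby 2.6+:

```ruby
# Illustrative defaults; real values would live in the API configuration.
DEFAULT_PER_PAGE = 15
MAX_PER_PAGE = 100

def sanitized_per_page(raw)
  # Non-numeric or missing input falls back to the default...
  value = Integer(raw, exception: false) || DEFAULT_PER_PAGE
  # ...and clamping stops callers requesting zero, negative, or huge pages.
  value.clamp(1, MAX_PER_PAGE)
end
```

The point of the clamp is that a malicious or buggy client can't turn one request into an unbounded database query.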
After that was done I moved on to implementing the ability to vote on polls. There was no obvious home for it, but since it is interacting with a post I put it on the Posts Interactions endpoint rather than create a dedicated endpoint with just one method. It aliases to a path in the same way as the rest of the interactions, so I think it’s consistent. That also required moving a few things around from the existing endpoint into a service and then having both call that. Since there were no tests around that capability I ended up writing those as well. With that done it’s time to move on to the documentation and then start hitting up the OpenID review.
- Incorporated suggestions into the API paging
- Completed the Poll Voting method
- Moving on to documentation updates
You can follow the status dashboard at this Google Sheet as well.
After a day of coding the paging is now in every endpoint that should have it. That means that we have paging right now for:
- Conversations (but not messages in conversations)
Because of the size of the code changes I imagine there will at least be some tweaking, and I could imagine some larger refactoring afterward too, but it’s in a solid, working state and as performant as the existing standard endpoints, so I’m happy with it.
Now it’s on to the rest of the checklist. With the scopes being rounded out I’m going to hold off on the security review for a little while longer. The first low-hanging fruit I’m working on is adding to the API spec the ability to vote on polls. It was an oversight in the original design but it should be easy to do. I just need to decide which endpoint to add it to. After that I’m going to double back to the mundane documentation update task. At that point I think it’ll be time to get up to my elbows in the OpenID code and get ready to make changes for the new scopes.
- Paging is now complete and ready for review
- Starting work on voting on polls through the API
You can follow the status dashboard at this Google Sheet as well.
Paging, paging, and more paging. I haven’t been committing as much time to development the last few days as I’d like. Some of that is frustration with the development process on the paging, which has been a lot of trial and error. Some of it is just how my schedule is working out too. There is progress there though. I have what I’d consider to be the rounded-out API paging infrastructure in place. It has migrated a bit since the last update because, as I tried to use it, I wasn’t happy with it. I’m still not happy with it but it is suitable. There will probably be some additional tweaking before final integration but what it allows is for us to have paging. I ended up wringing out design problems by wiring it into the Aspects Contacts endpoint method (to test index-based paging) and the User’s Posts endpoint (to test time-based paging). With all of that working and unit tested I’m now moving on to adding it to the rest of the endpoints. There have also been some additional discussions on the permissions scopes for the endpoints, and I think we’ve converged on a good final set.
- Paging API infrastructure modified to current MVP (I think) status
- Paging API now used in the Aspects Contacts and the Users Posts method
- Rounding out finishing the endpoints and updating the test harness
You can follow the status dashboard at this Google Sheet as well.
Coming up with a paging infrastructure for the API while looking at all of the ways it could be used and abused hasn’t been fun. Not that it hasn’t been totally worthwhile. I’ve actually learned a lot more about some of the nuances of how ActiveRecord and related libraries build up their queries. I’ve thought a lot more about the nature of the queries within Diaspora too. At the same time my head is numb, and for all of the effort I only got a half-completed design and less than 100 lines of code across two classes (not that more lines is necessarily better).
So what we will have are two paginator types: index based and time based. The standard methods across the two are:
- page_data: returns the current page of data for passed in query
- next_page: returns information to go to the next page of data
- previous_page: returns information to go to the previous page of data
The previous/next page functions will either return a new paginator object that corresponds to that page or a string representing query parameters that can be passed back out from a REST endpoint.
Both paginator types take a query object that will then have additional paging logic wrapped around it. If one is doing an index-based query this is just wrapping the WillPaginate library. However if one is doing a time-based query then it’s a little more complicated than that. We aren’t simply moving around indexes; we actually are doing some time math. That is all coded directly in the class. The big difference between the two comes in how the ordering happens on the SQL query. In both cases you can pass in an ordered query without throwing an error. However in the case of the IndexPaginator one probably wants to pass in their preferred order, otherwise they’ll get whatever the natural order from the database is. In the case of the TimePaginator it wants to keep control over sorting by whichever time field the calling code is using, so adding an additional sort could create confusing results.
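To make the time-based idea concrete, here is a much-simplified, in-memory sketch of a paginator with the `page_data`/`next_page`/`previous_page` shape described above. The real version wraps ActiveRecord queries; here a plain array of timestamped hashes stands in so the paging math is visible, and every name is illustrative rather than Diaspora's actual code:

```ruby
# Toy time-based paginator: pages move backwards through time, so the
# "next" page is everything strictly older than the current page.
class TimePaginator
  def initialize(items, per_page:)
    @items = items      # each item is assumed to carry a :created_at value
    @per_page = per_page
  end

  # One page of items strictly older than `before`, newest first.
  def page_data(before:)
    @items
      .select { |i| i[:created_at] < before }
      .sort_by { |i| -i[:created_at] }
      .first(@per_page)
  end

  # The next page starts before the oldest item on the current page.
  def next_page(current_page)
    return nil if current_page.empty?
    { before: current_page.last[:created_at] }
  end

  # The previous page ends after the newest item on the current page.
  def previous_page(current_page)
    return nil if current_page.empty?
    { after: current_page.first[:created_at] }
  end
end
```

Note that the paginator, not the caller, decides the ordering — which is exactly why letting calling code add its own sort to a time-based query would produce confusing results.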
Now that the paginators are done I need to add a presenter class that knows how to turn the query parameters into a “link” field with full URLs, per the API specification, and to update the services to call into and return the paginated data instead of their current form. I think I’ll do one that uses indexes, like contacts, followed by one that uses time, like user posts, and then start filling it out the rest of the way from there.
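The presenter idea can be sketched as a small helper that turns next/previous query parameters into full URLs. The base URL and parameter names below are placeholders, not the spec's exact format:

```ruby
require "uri"

# Build the "link" field: full URLs for whichever directions exist.
# A nil parameter set simply omits that direction from the result.
def pagination_links(base_url, next_params, previous_params)
  links = {}
  links[:next] = "#{base_url}?#{URI.encode_www_form(next_params)}" if next_params
  links[:previous] = "#{base_url}?#{URI.encode_www_form(previous_params)}" if previous_params
  links
end
```

The nice property is that the paginator's "query parameters for the next page" output plugs straight into this without the endpoint knowing whether paging is index-based or time-based.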
- Played around with the base pagination classes and completed them
- Starting to wire the first pagination into a first endpoint
Now that we’ve hit feature complete status it’s about getting more of the legwork done to get us really ready for integration. The first necessary feature we need before that is paging. As I wrote earlier, some endpoints don’t need paging and all of them technically have it as an optional thing. However to be really useful we need to have paging for several endpoints like posts, photos, conversations, et cetera. It looks like we can leverage a lot of the way we do paging in the lower levels for streams and just create a standard pager class that the API endpoints that need it can use. I’ve laid out how I want to approach that so now it’s on to implementation.
Along with the progress on the paging there has been progress on other mundane areas. All of these features were developed in side branches which needed to be reviewed and integrated into the main API branch. We are down to one endpoint left before the API branch itself is feature complete, not just having the code. All of the branches are orthogonal except for the routes.rb file and the en.yml messages file, so it’s pretty easy integration but needs to be done properly. In the meantime we are also having discussions about the finer-grained permission sets that apps will request and users will be notified about. So for example, an app could be given permissions to only read posts but read/write comments on posts, and so on. The endpoints already check for read/write tokens but they are broad tokens. Part of the next steps will be putting in the proper requests and making sure that the information presented to users is clear.
- All but one endpoint is integrated back into the API main branch
- Started work on the API Paging infrastructure
- Looking at the finer-grained permissions for each endpoint
It seems like just a couple of years ago that Microsoft, the evil empire of the 1990s and early 2000s, embraced open source and put the .NET ecosystem into the open source. It was a shocking event which was met with some pessimism by a community that had been bitten far too many times by the old Microsoft mantra “embrace, extend, extinguish” (not that they were unique in this mantra). It’s shocking that we are four years into this process, but more shocking is how well the .NET community is functioning. This is not an “in the open source” which is code for “you can see the code but we are the developers.” Microsoft, against all my expectations, has successfully built an open source community around open source .NET. Take a look at the pull request statistics. There is a substantial community element in most of the pieces (chart from Matt Warren’s blog post on this; check it out to read more):
If you look at the time series data Warren has created it looks even more promising. That’s not to say all is well for everyone in the .NET open source world.
As a person that tried to get back into it, to the point of polishing off SharpenNG to make it work in a post-Java 7 world, I have to say that even with the improvements over the last few years the non-Windows platforms are still not first class citizens. Development for .NET sings under Visual Studio, which of course only runs on Windows. The old Xamarin Studio, rebranded as Visual Studio for Mac, does provide a decent experience but still nothing in comparison. People on Linux, on the other hand, are out in the cold. Yes, there are the command line tools and Visual Studio Code. That works a lot better than I expected but you can feel how clunky that development is in comparison, and MonoDevelop seems to get worse and worse as time goes on. When I think about dabbling with .NET again I think about trying Rider by JetBrains next time. Perhaps they’ve cracked the nut. One thing I refuse to do is jump to Windows.
Related to all of that is the other elephant in the room: Microsoft doesn’t support UI development on Linux, nor has any plans to. There are open source alternatives like Avalonia and Eto.NET. I know that Michael Dominic’s development shop was able to turn out a live geospatial cross-platform app, Gryphon, using Avalonia, so there can be some serious work done with this. Maybe because of that, official blessing from Microsoft isn’t needed, especially if Rider combined with the above fits the bill. Maybe that’s the community evolving beyond Microsoft too? Still, at this stage there is a second (or in the case of Linux, third) class citizenship feel about it. It’s orders of magnitude further along than I thought they would get though, which is a promising sign.
We’ve finally reached the milestone we’ve all been waiting for. With the completion of the Search API Endpoint the Diaspora API is now feature complete. That doesn’t mean that it’s ready for integration into the mainline branch. It also doesn’t mean that there isn’t more fundamental work that has to be done before it can be used on a production system. It does however mean that we can start working on rounding out some of the other fundamentals and make our way in that direction.
The first thing that I am going to work on is the paging aspect of the API. The API spec discusses paging as a thing that endpoints may or may not do. Right now there is no paging. That’s fine for some things, like getting a list of Aspects for a user. It is a requirement for something like getting a list of a user’s posts or for getting your stream. For non-developers who are reading this, think of this as the piece that makes your “infinite scroll” work. Diaspora has implemented this in other areas but it will have to work a bit differently for the API. We’ve already had discussions about how we want it to work and there is a format specification for reporting it back. It therefore should be relatively straightforward to get it implemented. That is what I’m working on right now. After that we’ll want to go over all of the new code with a fine-toothed comb for style and idiom consistencies (beyond the automatic style checker), security reviews, etc. Lastly we’ll want to get the OpenID authentication/authorization/etc. stuff polished up a bit. Currently the app has to be re-registered every day. That’s not going to be viable for a real user even if it is for testing.
Still, the fact we’ve reached a feature complete milestone is great news and I’m excited to be ending the weekend on that high note.
- Diaspora API is now feature complete
- Search API endpoint, unit tests, and test harness are complete
- User contacts endpoint implemented completing that endpoint
- Beginning work on paging infrastructure for API endpoints that need it
To follow along with status please see the Google Sheet Dashboard.
After the long-winded post a few days ago on the API Status the latest update is pretty brief but important:
- Notifications API endpoint, unit tests, and test harness are complete
- Work on the last endpoint (search) has begun.
The last couple of days have been a lot of heavy effort slogging through some ever increasingly complex changes to get the API going. I started with what I thought was going to be a relatively easy time with the notifications, however the deeper I went into it the more I realized that I either had to come up with some relatively (for me anyway) complex queries to populate some of the return types or I had to settle for some N+1 type query behaviors. “N+1 queries” are ones where you pull the results one piece at a time. That’s fine for smaller data sets, like five or ten or something, but if you are dealing with hundreds of entries you are really thrashing your system. So I got about half way through the notifications API and then put it on the shelf and moved on to the API I was dreading the most: Photos.
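The N+1 pattern is easy to see with a toy illustration. No real database here — a counter stands in for round trips to the server, and all the data and names are made up for the example:

```ruby
# Count simulated "round trips" to a pretend database.
$queries = 0

# 100 notifications, each pointing at one of 10 actors.
NOTIFICATIONS = (1..100).map { |i| { id: i, actor_id: i % 10 } }
PEOPLE = (0..9).map { |i| [i, { id: i, name: "person#{i}" }] }.to_h

def fetch_person(id)    # one round trip per call
  $queries += 1
  PEOPLE[id]
end

def fetch_people(ids)   # one round trip for the whole batch
  $queries += 1
  PEOPLE.values_at(*ids.uniq)
end

# N+1 style: one lookup per notification -> 100 round trips.
$queries = 0
NOTIFICATIONS.each { |n| fetch_person(n[:actor_id]) }
n_plus_one_queries = $queries

# Batched style: gather the ids, fetch every needed actor at once.
$queries = 0
actors = fetch_people(NOTIFICATIONS.map { |n| n[:actor_id] })
batched_queries = $queries
```

In ActiveRecord terms the batched style is what eager loading gives you, versus lazily touching each association inside a loop — hence the "complex queries" option being worth the effort for notification-sized result sets.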
I was really psyching myself out about having to deal with the whole image file upload part of the Photos API and then the subsequent tie-in with the Posts API. It shouldn’t be that complicated but these are things I had never done in Rails or with the Kotlin Fuel framework. How would they interact? How difficult would the security checks be? You get the idea. It did take several hours of figuring out what the current controller is doing and then how I wanted to refactor the more complicated operations into a service, but I got there. Once I had that I had to test the whole aspect of limited posts et cetera, which I hadn’t done as well as I had thought previously. Thankfully my Ruby unit tests were solid; I just had some hiccups in my test harness.
At the end of the day we have the Photos API and the Posts API working with the photos perfectly, to the point where I was able to make a fully populated post including with an image that was uploaded externally as well. That means I’m going to jump back on the Notifications API to wrap that up and all that’s left is the Search API.
- Partial Progress on the Notifications API but shelved to figure out queries later
- Posts API is feature complete with full tests
- Was able to create an entirely populated post with the respective images from scratch using an external application for the first time ever in Diaspora (see this post)
- 1.5 Endpoints left to go to be feature complete
After slogging away for most of today on the Photos API, with lots of needing to understand how things work and a couple more tweaks before it was ready, I decided to celebrate by showing the ultimate progress report: a screenshot. What is so special about this screenshot? It is the first post in Diaspora that has been fully made by an external application. The “external application” in this case is a test harness written in Kotlin which is designed around the API spec. This test harness first uploaded the image file, then it created the post with every feature a post can have including: location, polls, and references to other users. The post was written by a “user3” (for testing might as well stick to simple names). This is a screenshot from user1’s perspective. Notice that they also got the expected notification. Yes it’s still a bit of a ways from done but it’s still a great milestone, so I’d say it’s time to celebrate for a bit before getting back to it :).
Brief update from today on the Diaspora API development progress:
- On the Users API, it turns out we probably still want to have the contacts endpoint, if only for the primary user, since the Contacts API works on a per-aspect level the way it is mapped. Whether that method shows up in the Contacts API at a different mapping or on the User itself is still TBD but it will be a change to the spec.
- The Post Interactions API is feature complete with full tests and the completed test harness.
- Work has begun on the Notifications API. This is the first change I’ve done that will require a DB migration, adding a new GUID column to notifications, so this is going to take a bit longer for me to complete as I do background research on that.
At this point it’s actually easier to look at what is left to do versus what we have done (which is a huge plus):
- The only two endpoints that haven’t been touched are Photos and Search. Once these are done (along with work on Notifications) the entire API spec will have been implemented.
- Implement a new poll interaction method for answering a poll through the API
- We need to implement paging on several of the endpoints. This technique will be similar to how it’s done in the core controllers but it has to be different because the return type needs to have the next/previous pages and the corresponding format needs to honor that. The actual mechanics of the queries are pretty much the same though so grafting them into the existing feature complete controllers should be relatively easy.
- Right now the OpenID integration works well enough for testing but it currently requires revalidating the app every 24 hours. This has to be tweaked to be more reasonable. There may be some refactoring in there as well.
- The Posts API Endpoint accepts any photos currently, including those that are already attached to another post. This is not consistent behavior and has to be corrected to only allow a “pending” photo to be added.
- Sweep of all of the APIs for consistency on security, service initialization (where appropriate), params parsing idioms, etc.
- Sweep through the unit tests to make sure that edge cases are covered in the same way
- Documentation updates to account for things discovered during the development (error codes added, format tweaks etc.)
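To make the paging item above concrete, here’s a rough sketch in plain Ruby. The helper name, URL shape, and envelope format are all hypothetical illustrations rather than the actual spec; the point is just that the return carries the page of data plus links to the next/previous pages:

```ruby
# Hypothetical paging helper: wraps one page of results in an envelope
# that carries next/previous page links alongside the data. The names
# and URL format here are illustrative, not the actual API spec.
def paginate(collection, base_url:, page: 1, per_page: 15)
  total_pages = (collection.size.to_f / per_page).ceil
  data = collection[(page - 1) * per_page, per_page] || []
  links = {}
  # Only emit the links that actually exist for this page.
  links[:previous] = "#{base_url}?page=#{page - 1}&per_page=#{per_page}" if page > 1
  links[:next]     = "#{base_url}?page=#{page + 1}&per_page=#{per_page}" if page < total_pages
  { links: links, data: data }
end
```

The query mechanics (offset/limit or since-ID) would live behind the `data` slice; grafting that into a controller mostly means swapping the array slice for the existing query and serializing the envelope.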
It’s been two weeks since my last Diaspora API Dev Progress report, but that’s not because nothing has been going on. Between attending RubyConf 2018 last week and this week being a holiday week there was definitely a drop-off in how much development time I put into Diaspora, and therefore into the API. However, over that time there has been some development progress:
- All of the previous work has been successfully merged down into the main API branch.
- The Contacts API is feature complete with full tests and the completed test harness
- The Users API is feature complete with full tests and test harness with the exception of the User Contacts API method. That method was supposed to be able to return another user’s contacts if that user allowed that. However that feature no longer exists in Diaspora so I believe it is extraneous. If that’s agreed upon then this is feature complete and ready to go.
This week I should be able to apply a lot more development effort than I have been able to the past couple of weeks. Hopefully that translates into forward progress on some more endpoints. The trend seems to be that they are getting more difficult to knock out so my velocity is slowing. I guess it’s better than being stymied in the beginning.
Yesterday was the first day in several that I could commit real time to D* again. After getting back up to speed and making the status post I went back into API development and made some good progress on some brand new endpoints. The first one I worked on, and the first that needed from-scratch coding of the main code, was the Tag Followings controller. The day before I had struggled to get Rails to make the POST for creating tags work against the spec. After talking it over and thinking about it, though, it was the spec that needed changing. In another software framework I could have just made it work, but relying on the auto-wiring in Rails brought the design flaw to light. With a simple change to the spec, real development of the Tag Followings endpoint started yesterday.
The methodology I’m using when developing the new controllers is as follows. First, I get the basic infrastructure and tests in place: write the skeleton of the controller code, write the skeleton of the RSpec tests, and wire the two together. I make sure the routes behave the way I think they should according to the API Spec without worrying about return values yet. The skeleton of the controller implements all the routes, and the skeleton of the unit tests covers the happy path plus reasonable error conditions: things like the user passing the wrong ID for a post they are trying to comment on, or an empty new tag to follow. I then go over to the external test application and code up the corresponding calls there as well. With everything running I make sure the endpoint is reachable from the outside (which it should be), but still don’t worry about returns or processing. If it’s easy to set up fake returns I do that; otherwise I just ensure the proper methods are called. Once all of that is coded and committed, I fill in the controller method by method. For each method I complete the unit tests and the external test harness interactions as well, then move on to the next one. In some cases, like Tag Followings, refactoring is needed elsewhere, which has implications for the above flow; I usually do those pieces before coding the controller. It’s at design time that it becomes apparent whether I should be sharing code with another controller, code which may not yet exist as a Service component. If I need to make changes in other code, I first check that there are unit tests properly covering the changes I’m about to make (at least as best as I can tell), write them if not, and then make the changes. This should minimize the possibility of disruption.
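As a purely illustrative sketch of that first skeleton phase, here’s roughly what it looks like, written as a plain Ruby class rather than the actual Rails controller (the route paths, class name, and stub shape are placeholders, not Diaspora’s real code):

```ruby
# Skeleton-first phase, sketched in plain Ruby: every route the API spec
# defines gets a stub action up front, so routing and the RSpec/harness
# wiring can be exercised before any real logic exists.
class TagFollowingsController
  # GET /api/v1/tag_followings (path illustrative)
  def index
    stub_response(:index)
  end

  # POST /api/v1/tag_followings
  def create
    stub_response(:create)
  end

  # DELETE /api/v1/tag_followings/:id
  def destroy
    stub_response(:destroy)
  end

  private

  # Placeholder return until the method is filled in for real; a 501
  # makes "not done yet" unmistakable from both the unit tests and the
  # external harness.
  def stub_response(action)
    { status: 501, body: { error: "#{action} not yet implemented" } }
  end
end
```

The skeleton tests then assert only that each route is reachable and returns the stub, which is exactly the bar the external harness check needs to clear before method-by-method fill-in begins.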
When interacting with Frank R. on the merge requests, one piece of feedback I got was that with everything compressed down to one commit it was hard to tell why I did certain things. As I code, all of that history is there, but I’ve been rebasing everything down to one commit per endpoint so that when it comes time to merge the API branch into the main develop branch the log will look something like: Post API endpoint complete, Comments API endpoint complete, etc. To get around this I’m trying a new flow. When I think something is ready to be merged I’m opening a Work in Progress (WIP) Pull Request (PR). That PR has the raw commit history and “WIP” at the start of the title. After a review and a thumbs up I’ll rebase it down to one commit and submit the final version for integration. By the time the WIP is done the code is feature complete and should be ready to be merged, so I’m counting WIP PRs as the threshold for saying something is feature complete.
With all that said the three new endpoints that were feature complete as of yesterday are: Tag Followings, Aspects, and Reshares.
After a week of distractions I finally have a new update on the progress. We’ve successfully merged all the work done to date into the one main API branch and are now working on new features moving forward. The first feature we have completed with full tests and test harness interaction is the ability to manage and work with the user’s followed tags. So we have the full post lifecycle from before, and now tags are done as well, though not yet merged into the main branch.
The merging of the various side branches into the main branch is coming along. Because this isn’t being done as a primary job there is a bit of an expected delay between the pull request (PR) being generated and the branch being merged in. This is giving me the opportunity to work on other features in Diaspora though. The process is going along much faster than I expected, which is good. At this point we have merged the Likes, Comments, and Posts Endpoints together. The PR on the Posts Endpoint is still queued up; however, all of those changes exist in one branch. What that means is that I was able to perform a full post lifecycle test using the test harness. We have an external application talking through the API and doing the following for a user:
- Creating a post
- Querying for the post and printing out its data
- Adding a comment to the post
- Liking the post
- Printing out the comments and who liked the post
- Deleting their comment on a post
- Unliking a post
- Deleting a post
This is a very important step. Follow additional progress on the API Progress Google Sheet.
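For illustration only, the lifecycle above boils down to a sequence like this toy in-memory model. The real test harness is an external Kotlin application talking to the REST API over HTTP with OpenID auth; every class and method name here is a made-up stand-in for that sequence:

```ruby
# Toy in-memory stand-in for the post lifecycle the external harness
# exercises against the real API. Purely illustrative: the real thing
# is HTTP calls from a Kotlin app, not these Ruby methods.
class FakePostApi
  def initialize
    @posts = {}
    @next_id = 0
  end

  def create_post(author, body)
    id = (@next_id += 1)
    @posts[id] = { author: author, body: body, comments: [], likes: [] }
    id
  end

  # fetch raises KeyError for unknown IDs, mirroring a 404 from the API.
  def get_post(id)
    @posts.fetch(id)
  end

  def comment(id, who, text)
    @posts.fetch(id)[:comments] << { author: who, text: text }
  end

  def like(id, who)
    @posts.fetch(id)[:likes] << who
  end

  def unlike(id, who)
    @posts.fetch(id)[:likes].delete(who)
  end

  def delete_post(id)
    @posts.delete(id)
  end
end
```

Running the full create / read / comment / like / unlike / delete sequence through one object is essentially what the harness does against a live pod, with each call replaced by the corresponding authenticated HTTP request.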
It’s been a few days since I’ve been able to put real time into Diaspora development, but I’m back today. Being back home from travel also means I can finally get past the blockers on the other branches. I’ve actually gotten all of the branches I had been developing on to feature complete status, with full tests and the test harness fully coded against them. That means that through the API one can complete the entire Post, Comment, Like, etc. lifecycle for posts with all data types (regular, photos, polls, location, etc.). Conversations are also feature complete with full test harness coverage. Streams are complete as well, although I haven’t tested with sufficient post volumes to exercise paging behavior. Now the trick is going to be working through the tech debt of getting them merged together into the API branch. Hopefully that’ll come in the next day or two. I’m going to spend some time doing other Diaspora work as I chip through those pieces as well. As always, follow the progress on the API Progress Google Sheet. After the merge I’ll be moving on to the Tags Endpoint, the first endpoint that is a full from-scratch development for me.
- Fully feature complete endpoints with full external test harness interaction completed are: Comments, Conversations, Likes, Posts, and Streams (except for paging behavior).
- Ready for merging of the side branches into the main API branch
Even though it was another short day on the road it was a productive day. The Conversations Endpoint’s Messages method got completed shortly after I typed up the previous day’s status message this morning. I then jumped onto the Streams API.
I’m still on the road so my contributions aren’t as great as I’d like them to be, but I did manage to make some progress on the API development. At this point the Conversations Endpoint is done minus the message listing of a conversation itself (that’s next up). The test harness is coded against Conversations such that it can create, read, and hide/ignore them. As I finish up the Conversations Endpoint work and wrap up the Posts Endpoint work when I get back home, I will soon be leaving the world of reviewing the existing implementation done by Frank while augmenting the tests, writing test harnesses, and making changes to get all of the tests to pass. I will then be entering the world of from-scratch development on the rest of the API.
While I’m on the road I’ve been hoping to get some more work in on the API. Yesterday was a bust, and I knew it would be. Today looked like it was going to be a bust too, but I actually was able to get some time in tonight thanks to some plans that were cancelled last minute. As I sat down to start working I realized that I hadn’t been quite as prepared to develop on the road as I’d thought. Before leaving I made sure my development laptop’s Ruby VM was fully configured and could compile the main code and the Kotlin test harness. I was all good to go! Except I forgot to push my work up to GitHub and GitLab. Oops. That derailed continuing work on the Posts API Endpoint, but with plenty more endpoints to go I started in on the Conversations Endpoint, the next most filled-in one to start from.
I did make a good amount of progress fleshing out the unit tests and making code changes so that the requests and returns on the Create method correspond to the specification. It was at that point I realized I hadn’t tested my setup quite thoroughly enough. I didn’t have a registered application in my OpenID setup on this dev instance, and I didn’t have the configurations I used when I set it up on my main development machine either. After some fumbling around I managed to get it registered so I could start testing the external test harness against the endpoint. After some final code tweaks I got it up and running, and now have the test harness generating new conversations between two users! On to the rest of the Conversations API tomorrow!
I’m still making good albeit slow progress on the Posts Endpoint. While the Posts Endpoint doesn’t have a lot of methods, the complexity of the send and return data is far greater than the other endpoints I’ve done so far. Posts have more than just text: they can have polls, geolocation data, mentions, aspects management, and photos. Yet posts are the core of the whole system. They are the digital elements we interact with the most, so progress on this endpoint is crucial. I’m pleased to say that at this point I’ve made enough progress with the unit tests and the test harness that I have been able to have an external program do the full lifecycle of posting: create a post, read a post, comment on a post, and like a post. I’m pretty stoked about that! While I have the full complement of post data available on the GET method tested, I still have to create the test harness methods around pushing posts with ancillary data (location, polls, mentions, photos), and need to write the unit tests for photos as well. The Photos Endpoint for uploading photos during a real post creation process is a whole other matter, but we’ll get to it soon enough!
Today I didn’t get as much progress as I had hoped on the API, but important work still got done. Yesterday I discovered that something was probably off in the way the repository rebasing was done about a week ago, and today I confirmed it. Working with Benjamin Neff (SuperTux) I was able to figure out a path forward for correcting the problem. While the git commands are pretty straightforward, being comfortable that I’ve done it correctly is another matter, so I did the process three times in a row. Each time I looked at the resulting git log and did a three-way diff of the API branch head before the new rebase, the API branch head after the rebase, and the main Diaspora develop branch. I may end up doing it a fourth time (or at least reconfirming this last run) before doing a final push, after talking with Frank about it.
After getting past that I spent the other half of the time making actual progress on development. Thanks to Dennis Schubert’s (DensChub) efforts we were able to make some progress on some API questions I had. After that I made changes to the respective implementations to make it consistent. Then I went back to the Posts Endpoint testing. I completed the full GET path happy path testing for simple and fully filled in posts (text, photos, polls, mentions, and location). I now have to add in failure path testing on the GET, and the corresponding test harness methods to complete that and move on to posting and deleting Posts.
Another day, another progress report on the state of the Diaspora API development. I had hoped by now that I’d be picking up a little more speed, but I always underestimate how painstaking working on high-coverage unit tests is. If I were doing a whack-it-together MVP startup-mode app I would still put automated tests around it for my own sanity, but since things are going to change, or maybe even get thrown away entirely, in relatively short order, there’s no need to go gnat’s-ass down into the details. That’s not the case with the API. Yes, the API is technically in draft mode, but it always looked like a really good draft, and the more I code against it and use it the more I believe that’s true. Yes, my development speed is increasing as I become more familiar with all the technologies and get past more technical hurdles, but it might take the better part of a man-month to finish this up (which is maybe a man-week more than I originally eyeballed).
The progress though has been steady. I had a hiccup late last night with my test harness: the Fuel HTTP library I’m using in Kotlin pushed a new release that requires Kotlin 1.3.0, which apparently is harder to come by than I thought. Manually pinning the version fixed it all, but not until after I had spent half an hour fumbling around with it before giving up. Today was the deep dive into the Comments Endpoint. As was the case with the previous Likes Endpoint, Frank’s previous work left a very solid base. Fleshing out the tests for different errant behaviors, testing error messages as well as codes, and finding problems with the interactions once the test harness talks to it over HTTP were the usual gremlins to squash. Still, with only two more mostly fleshed-out endpoints left from Frank’s code base, I have a feeling the development pace will be slowing down. Maybe I’ll have gained enough coding efficiency across all of these to make up some of the difference.
Along with the above gremlins now that it’s being interacted with I am seeing some potential small grained details that need to be discussed about the API. That’s all tracked in the issue tracker on the API documentation page though. Again, this is solid work by the team putting the API together and Frank’s initial code base that I’m starting from.
In summary, progress for the day:
- Comments API Endpoint is finished and ready for pull request
- Test harness example of interacting with the Comments API is completed
- Some Issues were submitted to discuss minor changes to the status reporting back from the REST services on things like what happens when a Comment ID doesn’t match the Post ID that the REST endpoint was called with.
- Some small documentation touch ups to address navigation
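As a hypothetical sketch of the kind of status-reporting question raised in those issues, here’s one way a comment lookup could refuse to act on a comment that doesn’t belong to the post the endpoint was called with. The helper name, data shape, and status codes are illustrative; the actual behavior is exactly what the submitted issues are there to decide:

```ruby
# Illustrative sketch of the comment-ID/post-ID mismatch case: look up a
# comment, but refuse to act on it unless it belongs to the post in the
# request. Whether the mismatch is a 404 or something else is the kind
# of question tracked in the API issue tracker.
def find_comment_for_post(comments, post_id:, comment_id:)
  comment = comments.find { |c| c[:id] == comment_id }
  return { status: 404, error: "comment not found" } if comment.nil?
  if comment[:post_id] != post_id
    # Treating a mismatched parent as "not found" avoids leaking the
    # existence of comments on other posts.
    return { status: 404, error: "comment does not belong to this post" }
  end
  { status: 200, comment: comment }
end
```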
Being in the early phases of getting the implementation started, it was inevitable I would encounter a little extra inertia to overcome. Part of that is my own doing, but all of it is important for having confidence in what I’m developing. The easiest part was filling out the API Implementation Stoplight chart so everyone, including me, can track what is going on with the development. Then it was on to a fork in the road of sorts: do I want to start an external test harness now, or wait until more is implemented? I decided on the former.
While I made progress with a few hours of Diaspora API Dev yesterday it wasn’t until today that I finished my first code change towards the API: completing the Likes Endpoint.
Yep, two Diaspora API dev reports on one day. After taking a break for dinner and just watching some TV I got back to figuring out how to properly interface with the authentication and API from an external client. I was re-reading the OpenID spec, watching some videos, reading some presentations, et cetera. If I’m going to be working on the API this is something I definitely need to be deep diving into a lot more. My initial order of business however was just getting it working.
I’m only a few hours into getting fully going on the Diaspora API development project. I had been pre-flying that whole experience earlier last week by studying the existing code base, familiarizing myself with the discussion threads et cetera. Over the last couple of days I’ve been trying to focus more on moving the ball forward as well. Before really doing that though there is still a little ground work to do.
The Cambridge Analytica debacle from earlier this year and the subsequent #deletefacebook storm brought me to the alternative social media platform Diaspora. At the time, as I wrote here, I had hoped to leave the walled gardens forever. Initially I did just that, but practicalities have since chipped away at that forced isolation quite a bit. In some cases, like DDG, I’m still 99% using the open alternative. In others, like YouTube, I’m mostly using the old system because I just can’t get what I need out of the alternative yet (although I still try more and more every week). For much of it, though, especially on the social media side, it’s more of a mix. I’m on Diaspora as much as I’m on Facebook. I’m on Mastodon more than I’m on Twitter, but Twitter was always a small platform for me compared to my usage of Facebook. The best way to think of this blend is that I try to make Diaspora and Mastodon my primary platforms and Facebook my secondary one, with Twitter a distant third.
What that means practically is that I’m pretty much logged into Diaspora, Mastodon, and Facebook continuously throughout the day. The first places I post to are Diaspora and Mastodon. The first places I check posts are Diaspora and Mastodon. Most of my new activity is on Diaspora and Mastodon, with manual cross-posting (thanks again, Facebook, for permanently breaking your API to prevent external posting) when I want to share the same thing on Facebook as well. Because I have just over 1000 friends on Facebook, and almost all of them are people I’ve interacted with in real life (most mere acquaintances, or met once at a social function or something), there is simply a larger volume of relevant and more personally resonant posts from others there. So if one were to look at my activity feeds and notifications on a given morning you’d see tons of activity on Facebook and a little activity on Diaspora and Mastodon. Today was different.
Today the equation was reversed. Today I had more interactions to wade through on Diaspora. I had more relevant interactions to wade through at that. I had more notifications to wade through. I even got comparable engagement on my cross-posted material from late last night on all three systems. That’s the first time that’s happened since I went back to having a foot in both worlds!
Is it that I crossed a tipping point in the number of people I’m connected to on these alternative social media systems? Is it that the influx of Google+ users has caused a spike in engagement across the systems in question? I don’t know, and this will probably remain a noteworthy exception rather than the rule moving forward. It can’t be a bad sign, though, except in one way: in the span of writing this article, a free association lasting 15 minutes, I’ve already received almost ten notifications on Diaspora. I know the notification controls are not as fine grained on Diaspora as they are on Facebook. It’d be a great problem to need to tackle that sort of feature request in the near future :).
I can’t express how happy I am to have the privilege of a combination of time, ability, desire, and energy to contribute substantially to the Diaspora project right now. Ever since I started using it in the spring it’s something I’ve wanted to help with. I certainly got my feet wet back then on some tweaks to the Twitter and Facebook interaction code, the latter of which is permanently broken thanks to Facebook’s new API spec. With the getting up to speed on Ruby, Rails, and the Diaspora code base behind me, I’m looking forward to helping tackle a much larger and persistently requested piece of code: a Diaspora API.
I’ve mostly been “microblogging” updates on Diaspora recently. That’s a fancy way of saying I haven’t been doing any in-depth writing but instead just making quick ad hoc posts on social media. As I am now ramping up my development on open source projects, primarily Diaspora by the looks of it, I’m hoping to start posting here more frequently capturing new lessons learned, observations from my exploration of these newer languages and code bases, and just getting more writing in.
Over the summer I actually spent a good deal of time exploring different cross platform development frameworks of the .NET and C++ variety. That was intended for work on a very niche open source project idea I had conjured up around my classic computing hobby. By the time I made enough progress to the point where I could potentially be productive (although I still want to explore wxWidgets a bit more), the bug to help on alternative social media platforms bit again.
Sorry for the absence. I hope to be a regular poster again for the half dozen of you that actually read this!
Since the release of Ubuntu 18.04 I’ve been using it a bunch in various VMs. I love the new minimal install feature. Even though it doesn’t save that much hard disk space, it does make things a lot less cluttered, which I absolutely love. Because I work in VMs I’ve been experimenting with migrating OSes up to 18.04 rather than crushing old VMs, building from scratch, and porting data over. This process has worked almost seamlessly the dozen or so times I’ve done it across VMs from several different baselines: mainline 16.04, mainline 17.10, and Ubuntu MATE 16.04. The core software itself seems to work perfectly fine out of the box, but as I said, it is almost seamless, not seamless. There seems to be a bit of a wrinkle with the Ubuntu MATE update with respect to the VirtualBox Guest Additions, specifically shared folder drives.
I’m now three weeks into picking up and using non-walled-garden social media systems instead of traditional ones, specifically Diaspora over Facebook and Twitter. It has mostly been a good experience, despite some major disagreements with some of their user experience decisions and other rough edges that I hope to help fix soon as a contributor. But the thing that sets social media apart from blogging and other static publishing ecosystems is the concept of sharing and interacting with other users. By the nature of the fact that these massive digital halls are still pretty empty, I’m just not getting my fill of that.
This pro-Swift article came across my RSS feed recently. While I don’t want to do a direct comparison of Swift versus Kotlin, since I haven’t done Swift coding, I did think it was interesting to point out the similar efficiencies you get when their simple example is built in Kotlin, compared to other languages like Java, the language they picked on too.
Over the weekend I had made a bunch of progress on migrating away from the walled garden systems. I’m happy to report substantially more progress. This will of course be an ongoing process of refinement and testing. However I’m currently getting substantial amounts of my needs met in enough areas that I’m prepared now to start pulling the plugs on Facebook, the Google Ecosystem, Twitter, and so on. When I wrote about this over the weekend I had completed my hypothetical replacement of several systems. I have some updates to those elements as well though. My current replacement portfolio looks as follows (summary at the very end):
As I wrote earlier this week after the Cambridge Analytica event came to light my nagging feeling that I needed to get off these Facebook, Google, etc. platforms crossed a threshold. It was no longer something that I thought I should do but something I was going to actively do. In one week I’ve made progress in pretty much every dimension (scroll down to the bottom if you just want my list of alternatives).
I’ve had my moments in the past when Facebook pissed me off and I tried Google+. That didn’t work out too well, so I went back to Facebook after they addressed some of those problems. I had my moments when I was concerned about the amount of tracking Google does in searches, so I went to DuckDuckGo. That’s still my main search engine, but sometimes I need results that come out better in Google so I go there. I also use the Google platform for my e-mail, documents, etc. The concept of them selling my data in exchange for giving me free service has bothered me to varying degrees over the years, but seeing how greedily it was manipulated recently has really amped that up for me. The amount of my information available to the highest bidder has always been a known quantity, but these recent stories turn that up to eleven. It’s not just the Cambridge Analytica story. There is also the story about Facebook and other companies forcing users to turn over their keys, so to speak, so they can look at any and all of their personal data as a condition of working for them. There is the way they exploited that data in difficult discussions.
I almost never wait in huge lines for anything. I camped out once for football tickets in college. Once. I also once waited six hours for an iPhone 4 when it first came out. It was my first smart phone and I had been putting off getting one way too long. That was it though. Yet I know people who have waited in ever decreasing lines for each iteration of the iPhone. The reduced lines are definitely part of the sizzle wearing off and the iPhone being just another smart phone. Yet even at 8 pm last night there was a line for iPhones outside our local Apple store. It didn’t wrap around the mall like in the iPhone 4 days but the end of the first day still having a line for an iPhone 8 was pretty telling to me.
It was just a few months ago that Ubuntu announced they were killing off Unity, their main desktop option. Many people wondered whether this was part of a larger pivot towards more profitable ventures, and thus whether they would be leaving the desktop behind. I too was worried about that outcome, but calmed myself by remembering that I was no longer locked into one vendor for my OS. In the intervening months, however, it has become clear that Ubuntu is not killing off the desktop, far from it. In fact, with the strides they are taking in Ubuntu 17.10 and 18.04 they look about to put out their strongest desktop offering to date. Not having to carry the weight of a phone platform, their own desktop environment, etc. has allowed their team to focus on making positive contributions to Gnome proper. I’ve had the opportunity to play around with the Ubuntu 17.10 betas and have to say I don’t think I’d be missing anything from my current Ubuntu experience. I look forward to upgrading to 18.04 when the time comes, no longer worrying about whether one of my desktop baselines is going away.
On one of my classic computing Facebook Groups there was a post quoting Edsger Dijkstra: “It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.” It’s actually part of a much larger document in which he condemns pretty much every higher order language of the day, IBM, the cliquish nature of the computing industry, and so on. Despite most of it being the equivalent of a Twitter rant (in fact each line is almost made for tweet-sized bites), there are some legitimate gems in there; one relevant to this topic being, “The tools we use have a profound (and devious!) influence on our thinking habits, and, therefore, on our thinking abilities.” No, I don’t agree that starting with BASIC, or any other language, permanently breaks someone forever, but if the tools we use drive our thinking, they can leave us with bad habits to unlearn. Yet has anyone tried to actually write BASIC, as in the BASIC languages of the 60s, 70s, and early 80s, with actual design principles? Fortunately/unfortunately, I tried a while ago, with some interesting results.
While I’m obviously becoming quite enamored with Kotlin recently, this is like the early dating stage for me. Everything is great when you first start dating someone but it’s after you’ve been with them for awhile and see their warts, which everything and everyone has, that you finally decide whether it’s the right fit or not.
As I wrote about here yesterday, I am taking my exploration of Kotlin to the next level by looking at performance metrics using the Computer Language Benchmarks Game. As of right now I’ve completed my first two steps: getting the benchmark suite building/running Kotlin code, and doing a straight port of the suite (follow along at the Gitlab project). This was really 90% IntelliJ auto-conversion followed by me fixing compilation errors (and submitting back to JetBrains a nasty conversion bug that came up in one test). So now on to the results! Well, actually, not so fast on that one…
I may be enamored of my new programming toy, Kotlin, but I’m not one to go blindly into something like this. While there is a lot to love about the language I was curious how fast it was compared to Java. It’s all running in the same JVM but as I know from Scala, another JVM language, there can be dramatic performance differences. Benchmarking is the usual, and probably clichéd, way of addressing that. The Computer Language Benchmarks Game website is as good a place as any to start. Unfortunately no one has bothered making Kotlin language tests yet. Undaunted I saw this as an opportunity to contribute back as well as get a little extra Kotlin coding in. So, I’ve started a fork to develop and contribute back Kotlin versions. You can follow along and/or contribute to the port at my Gitlab project.
My approach to this endeavor is as follows:
- Update the project drivers and initialization files to get Kotlin running
- Do a straight translation port of the latest version of each of the Java benchmarks
- Compare the performance of the straight-port versions to Java
- Create tweaked Kotlin versions to further optimize
- Compare the performance of the tweaked versions to Java
I’ve already completed the first bullet, which is in the develop branch of my repo. I think the first two bullets will go relatively quickly; tweaking and optimization will be another matter.
P.S. A big thanks to Sebastian Thiel for setting up a project/repo that constantly mirrors the Benchmarks Game’s CVS repository. It is indispensable to have the latest and greatest automatically available (thanks also to Gitlab’s integration capabilities) as development moves forward.
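To make the straight-port versus tweaked-version distinction concrete, here is a hypothetical sketch (my own toy function, not a benchmark from the actual suite) of what the two stages tend to look like:

```kotlin
// Hypothetical illustration (not from the actual benchmark suite) of the
// two Kotlin stages described above.

// Stage 1 — straight port: what an IntelliJ auto-conversion of a Java
// benchmark loop tends to look like, with explicit mutation.
fun sumOfSquaresPorted(n: Int): Long {
    var total = 0L
    for (i in 1..n) {
        total += i.toLong() * i
    }
    return total
}

// Stage 2 — tweaked/idiomatic rewrite: same result, expressed the Kotlin way.
fun sumOfSquaresIdiomatic(n: Int): Long =
    (1..n).sumOf { it.toLong() * it }

fun main() {
    // Both stages must agree before any performance comparison means anything.
    check(sumOfSquaresPorted(1_000_000) == sumOfSquaresIdiomatic(1_000_000))
    println(sumOfSquaresPorted(10)) // prints 385
}
```

Whether the idiomatic form is actually faster, slower, or identical after JIT warm-up is exactly the open question the benchmarks are meant to answer.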
Although my primary development language of recent years has been Java, I have been itching to get to a more modern language. Yes, Oracle made lots of good strides with Java 8, but they are still falling woefully behind. As a former heavy .NET developer, the open sourcing of C# and its becoming truly cross platform made it my original go-to choice. You can see that in articles I wrote here and in contributions I made to Sharpen to get it working under Java 8, with the new date types and so on. Throughout my experiments with C# I refused to go back to Windows, and sadly, while there have been great strides, the bottom line is that Linux is a third-rate supported platform compared to Windows and the not-quite-so-poorly-treated macOS. But what alternative did I have? The answer came with the increased news coverage, dare I say hype, around Kotlin. This was a language I had only nominally looked at before, but now I did a deep dive and I have to say I am really liking it.
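As a small taste of what won me over, here is a hypothetical example (the names are mine, not from any real project) of the kind of boilerplate Kotlin eliminates compared to Java:

```kotlin
// Hypothetical example: a value type plus two helpers that would take pages
// of Java boilerplate (constructor, getters, equals, hashCode, toString).
data class Contribution(val project: String, val hours: Int)

fun totalHours(contributions: List<Contribution>): Int =
    contributions.sumOf { it.hours }

// Null safety is explicit in the type: maxByOrNull returns null for an
// empty list, and the compiler forces callers to handle that.
fun topProject(contributions: List<Contribution>): String? =
    contributions.maxByOrNull { it.hours }?.project

fun main() {
    val contributions = listOf(
        Contribution("benchmarksgame", 40),
        Contribution("sharpen", 12),
    )
    println("${totalHours(contributions)} hours, top: ${topProject(contributions)}")
    // prints: 52 hours, top: benchmarksgame
}
```

The equivalent Java would need a hand-written class with equals/hashCode plus stream plumbing, and the nullability would be invisible to the compiler.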
There are certain things in life that you take for granted but didn’t know you did until you didn’t have them anymore. Swagger is definitely one of them.
As the whole “what happens to Unity” thing unfolds I decided to redouble my efforts in trying different distros again. I’m trying everything from trailing edge (latest Debian) to bleeding edge (Solus). As luck would have it, it was time for me to refresh one of my development VMs, so I decided to jump that one from Mint to Solus to give it a real-world spin. My first impressions are that it is a really interesting distro and one I’ll keep playing with, but there is one not-so-tiny problem that hopefully they will grow out of.
I’m being impatient, and it’s my own fault. I started the Linux Craptop experiment to see how much mileage I could get out of a decade-old laptop running a lean(ish) Linux. It actually became my only home laptop while my 6+ year old (I think) MacBook Air was getting its battery replaced. I was going to “suffer” through it for just a few days, and then the MacBook would hold me over for at least another couple of years. At this point, however, I’m really champing at the bit to retire that Mac and go Linux full bore.
At the beginning of January I decided to try my hand at using a ten-year-old laptop running Linux Mint MATE as my daily at-home machine. While there is certainly some cruft associated with using such an old machine, for the most part the experience was perfectly fine. In fact I’m using it right now to write this very article. I wouldn’t recommend running out and buying one solely for the purpose, but the fact remains that Linux Mint MATE, and probably Ubuntu MATE as well, provide a great experience under an average user’s workload on underpowered hardware.
I’ve been a huge convert to Linux Mint and Ubuntu for several years now. In the last year I went so far as to be running Linux as my bare metal OS on both my work laptop and home desktop. I’ve never had an update for Mint or Ubuntu get so borked up that the UI refused to function properly…until now.
I was away for a week so couldn’t do my Linux craptop experiment. Sorry, but I refuse to be beholden to a ten year old laptop while on travel. So now, today, is the second day that I’m using this as my primary machine for when I’m browsing the Internet and doing things while I’m watching TV on the couch. Yes that seems like a limited subset, but I spend a good amount of time vegging in that state so it’s not as insignificant as it seems. I’ll have a thorough breakdown of my experiment at some point but by far the biggest nuisance I have that is driving me crazy is the lack of trackpad gestures.
When gestures first came out for laptops I thought they were mostly gimmicky, but once I had my first laptop that really had them I was hooked and didn’t know it. Now that I’m trying to use a laptop without them I’m finding it very cumbersome. It’s not a total loss, however, because this trackpad has the beginnings of gestures in the form of scroll zones on the right and bottom edges. I can simulate scrolling to some extent, which is a big part of my gesture use, but it really isn’t the same thing. How did we live without gestures all this time? At least Linux Mint MATE 18 supported these limited gestures out of the box on this ancient laptop.
Sometime in 2016 the Linux Action Show podcast on a lark decided to run both a period-appropriate and a modern version of Linux on ten-year-old equipment. As luck would have it, along with my other eccentric hobbies I also have a classic computer collection. One of the computers in my collection that I ran across recently is a Dell XPS M1530 from late 2007 (specs). I bought it as a not-too-crappy but not-so-great home laptop suitable for browsing the internet, doing my home finances, et cetera. Because I’m a glutton for punishment, I guess, I have decided to try to use this laptop as a modern browsing computer for a little while. With a 2.6 GHz Intel Core 2 Duo and 4 GB of RAM it shouldn’t do too badly. I’m going to run Linux Mint MATE 18.1 to give it a fighting chance. Ubuntu and Cinnamon require a bit more graphics and CPU horsepower, and while the 4 GB of memory should allow it to hold its own to some extent, the ten-year-old processor and graphics card would suffer. MATE, on the other hand, is far lighter weight and more streamlined.
Probably the biggest hiccup is going to be the battery. This is the original battery from ten years ago. I doubt that it is going to hold up well to being unplugged. That’s okay though, I’ll be able to leave it plugged in while I’m using it without much inconvenience. I’m not going to make this my primary laptop or anything so if I can only use it while tethered to the couch then so be it.
I’m currently finishing up patching the system, getting printers set up, and installing software like Chrome. I look forward to playing around with this in the coming weeks and reporting on it. In fact I’m writing this very blog post in Firefox on it right now while the OS patches continue to progress…
I am very early in the Linux .NET development experiment. I am pretty busy with work and life, so I don’t have a ton of time to play around with these things. Having come from a background where most of my recent development (the last several years) has been in technologies other than .NET, I have a double hurdle to clear: getting used to .NET and getting used to doing .NET on Linux. Therein lies the rub.
I may have cut my teeth on non-Microsoft systems, but the better part of my career was spent building most of my software with and for Visual Studio. It was only in the last few years that the landscape changed and my work became dominated by Linux, Java, and generally non-Microsoft systems. I’ve thoroughly enjoyed the explosion of open source software and the ability to contribute to and use it. I’ve also enjoyed being able to extricate myself from Windows. But with Microsoft’s recent foray into open source, and with the increasing stagnation and calamities in the Java community, I’ve decided to give the .NET stack a whirl again, but with a twist.
Over on Slashdot there is an article about an IP saga of sorts between Wix and the makers of WordPress. While the Slashdot title accuses Wix of “stealing” code, not even WordPress’s Matt Mullenweg accused them of that in his original post. What happened is pretty simple: the Wix engineers wrapped a WordPress rich text control so it would work well with React Native. They released that project under an MIT license and then dutifully used it in their proprietary iOS application. The WordPress control they wrapped was licensed under the GPL, and that is where the problem lies.
With the release of the latest MacBook Pros, Apple has finally returned to some semblance of modernity in the laptop arena. They have left their desktop line to languish for at least another six months, though. That makes my recent purchase of a Hackintosh rig (which I admittedly still happily run Linux on, without even considering the need to go back to OS X) seem like an even better idea. Even with my embrace of Linux I still would have kicked myself if a dream iMac came forward, but thankfully nope! Which brings me back to the latest laptops. They are obviously a welcome upgrade to a laptop line that time forgot, and they have some very neat features. They also carry the usual Apple Tax: in this case about $400 more than a comparably specced Dell, and about the same dollar price as a much better System76 laptop. But Apple has far better battery life than either of those two machines ever would.
Is it a great upgrade? Yes. Is it worth the money? Probably/maybe/depends. Is it something I’m dying to buy? No. At this point none of the Apple laptop offerings are drawing me in. My MacBook Air mostly gets the job done, even if it’s starting to show its age at five years. But the processor isn’t the thing that’s killing me; it’s mostly the memory limit when I try to run VMs. So to spend $1500–$1800 just to fix that problem seems outrageous. At this point I’m going to go with my original plan: play around with my seven-year-old Dell running Linux and then give a System76 laptop a whirl.
I’ve been prepping for potentially jumping from iPhone to Android for my personal phone. I’m getting sick of the declining quality of iOS and its apps. I’m getting sick of vendor lock-in. My problem with vendor lock-in has a lot to do with the feeling that I’m not in control of my data. Based on what I’ve been reading, and this article on TechCrunch, it seems the problem is becoming far more exacerbated on Android with the Google platform. I could already see it in the latest service offerings that Google has been pitching with the new Pixel. As I played around with the Google apps, they seemed at least as wonky if not more so, and on top of that they seem far more invasive about how they deal with your data. They also do a lot more of the insipid “opt out” versus “opt in” thing than I see on iOS. While I may be buying into a supposedly more open platform, would I be doing it at the expense of my own data control? Do I need to look beyond Android to Ubuntu phones or something? SMH.
A few years ago, after yet another one of those hacker scares of compromised browsers and operating systems, I decided to get a bit dramatic: I stopped working primarily on my computer’s host operating system and instead ran everything I could inside virtual machines. VirtualBox has always been my tried-and-true technology, but in recent months it has suffered a plague of major stability problems across all of my host operating systems. These are problems I never had under VirtualBox 4. The 3D drivers seem to get more and more unstable with each subsequent upgrade of Windows or macOS. Chrome/Chromium/Electron applications that used to run okay are now display-artifact hell. With the latest batch of updates the audio drivers keep failing, as well as the 3D drivers.
My experimentation in fitness has taken a back seat these past few years. My hardcore experimentation really has fallen more into hitting a point of being unsatisfied with where I am and then clawing back for a bit. While I intend to continue to write on that topic as the desire strikes me, the topic of experimentation that has been occupying my time recently has been my good old computers/software engineering. I’m adding a section to this blog specifically for this, and changing the format around to accommodate that. I look forward to getting all of these thoughts out of my head and onto “paper”.