Software Engineering
Getting Started With AvalonStudio

In my write up on Avalonia first impressions, one of the things I missed most was a full IntelliSense/auto-complete style system for the Avalonia XAML and the ability to create components in the IDE on Linux. "The IDE" in both of those cases was the one I had focused on, JetBrains' Rider IDE. When I originally wrote the article I thought there was no alternative, so I'd have to use a workflow akin to doing it in a pure text editor. However, as I was in the middle of editing, I discovered that the Avalonia developers are actually working on a full IDE built with Avalonia called AvalonStudio. While it is in heavy development, still in beta, and missing some key features because of that, I have to say it is impressive how many features it has and how well it works already. Could it replace Rider for .NET development right now? No. Could it eventually? Maybe, but that's not why it is interesting to me. It's interesting to me because it has the facilities I sorely wanted in Rider but found missing: XAML IntelliSense (and preview!) and Avalonia component creation with proper namespace behaviors. So how does one go about running it?


Avalonia First Impressions

I've documented my first forays into doing cross-platform .NET development with Avalonia, and I've stated that I'm overall impressed. But what are my deeper thoughts on it?


Avalonia ToDo Tutorial (On Linux)

As I wrote in my Avalonia Hello World (On Linux) article, I've made more progress than just executing the canned auto-generated Hello World. I've actually been through their one official tutorial and then some. You can find it on their website here. It walks you through making a simple proof-of-concept "To Do List" application, covering creating a project, adding controls, creating reactive controls, and how the Avalonia system works. It has two paths: one for those using Visual Studio on Windows and another for those using the .NET Core command line tools. Since I'm sticking with the whole doing-everything-on-Linux thing I'm using the latter.


Avalonia Hello World (On Linux)

As I pick up doing cross-platform desktop application development using AvaloniaUI I need to go through the obligatory hurdles of the "Hello World" program and following tutorials. I figure why not document them here for others too. Fortunately they actually provide some pretty solid getting started and tutorial guidelines, so this should be considered more my personal notebook of those.


Self Hosting Without Self Owning

In my 2018 attempts at ditching the walled gardens I made a bunch of progress, which I've since backslid from, on replacing Google services with Kolab Lab's offerings. I want to have the benefits of GMail, Google Drive, etc. but I would rather not have Google owning all of my data. At the same time I'm not going to fall on my sword and go back to mid-1990s infrastructure a la Richard Stallman either. Self hosting these things is a daunting task which I considered to be way out of reach for this software developer. I thought that, anyway, until I ran across the FOSDEM session on the YunoHost system, which makes self-hosting a much more out-of-the-box experience. Could this be the solution to my problem?


Development Ramp Up

After several months of dormancy in my software development activities I've recently started hitting a solid pace of getting back into the swing of things. As much as I wanted the next big thing for me to work on to be something Fediverse related, specifically Friendica, that has created a huge mental block for me. I wrote about that a lot in this post. I'm not a language snob, more on that below, but getting fired up about doing PHP work on that project isn't happening. I still never got to the bottom of whether it was more PHP or the inertia of getting started on the project. It doesn't matter either way because I wasn't getting anything done. I wasn't sure if maybe it was a general lull; I think I've answered that question in the negative. So what is this looking like then?


My Contribution Conundrum

I took the deep dive into the Fediverse last year when I decided to bite off the Diaspora API development task with Frank Rousseau. It was a great experience and I had hoped to do a lot more Diaspora work. With a lot of the ActivityPub discussions, and there being some really good questions about how that should work, I embarked on an experiment to see what a merged Fediverse social media experience would feel like. Friendica has tie-ins to Diaspora, ActivityPub, and many others, so it was a great candidate for it. I am way behind on doing my write up but I have my notes; that's for another post. This post is about a conundrum I'm facing with my open source/Fediverse contributions: I don't know which project(s) I want to focus on any longer.


Integrating With the Greater Fediverse

I remember the first time I had to integrate myself into a new community. It was right after college. I had started my first job which was in a new specialization of my industry. I had to come to grips with a life transition, learning how to work with a new team and new software, learning about the ins and outs of the industry around me and those interactions, et cetera. It is a very unsettling position to have orders of magnitude more things to learn than time to do it. No one expects someone to pick it all up instantly but in me there is a drive to “come up to speed” as fast as possible. When it comes to contributing to the Fediverse I am feeling the exact same thing right now.


Diaspora API Real World Usage: A Blog Discussion Timeline

“Dogfooding” software is one of the best ways to wring out any problems with a design or implementation. The Diaspora API was designed with a wide variety of uses in mind including something potentially as grand as being the replacement backend for a revamped website. With the actual API now “in the can” and waiting for the real PR review I decided to try to use the API for an actual purpose and start dogfooding it. I had several ideas but the first one I decided to latch on to was a blog discussion timeline feature.


Diaspora API Dev Progress Report 30

We've finally done it! Frank and I were able to get the last of our internal reviews done and the API code is now in the "real" code review for integration into the main Diaspora development branch. That alone is an amazing thing, but I have a second piece of big news related to the API as well. Today I was able to stand up a first version of a blog "Discussion Browser" that uses the API to pull all comments and other interactions for a blog post that is associated with a specific Diaspora post. I'm going to be doing a write up of that in more detail later, but as a first cut it worked pretty well and showed that both the API design and the code itself are functioning well.


Rant: WTF Spring Boot?

Some people just can't leave well enough alone, I swear! When last I left Spring Boot world everything was going great. The project bootstrapping was pretty straightforward. The documentation pretty much matched the actual behaviors. The actual behaviors were pretty well laid out. Today I tried to create a project from scratch. Between fighting Java version hell from the online generator, fighting Gradle dependency hell both there and in IntelliJ, and then wrestling with some new fucked up syntax for something as simple as reading in the configuration file, I have wasted two hours and gotten absolutely fucking nowhere!


Diaspora API Interactions Part 2: The First "Real" Interaction

I was so excited when I finally got a real pod interacting with the API that I knew I'd have to get it written down before I could get to sleep. However, before dropping right into the interactions themselves, I decided to take some time describing how a piece of software would be allowed to do anything with a server. In Part 1 I laid all of those details out to get across some very important points:

  • We are using a standard (OpenID/OAuth2) protocol for doing this
  • Users have to give explicit permissions to an application, including being told what it is and is not asking to do
  • There are security measures once an application is granted permissions as well.

This article essentially details the very first communications and gives people a feel for what the Diaspora API specification looks like in practice, not just in theory.


Diaspora API Interactions Part 1: Authentication

Okay, I'm obviously overexcited about the fact that something which I knew should work actually did work. However, all the previous API usages were on servers on the local machine, not behind an HTTPS link, and not being shared with the rest of the fediverse. This one breaks through that barrier. I have therefore decided to document it in excruciating detail. For the first pass all of these interactions were manual, using cURL and the Firefox RESTClient plugin. The next step, which will be coming up very shortly, will be creating the very first server to use this for a real purpose (I'll document that as it happens). This document goes over the nitty gritty details of the whole authentication piece. The next article will go into the calls themselves. If you don't care about the nuances of the authentication steps then just skim or skip this and go to Part 2. So without further ado, here we go…
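
For a feel of the shape of that flow, here is a minimal Ruby sketch of a standard OpenID Connect authorization code exchange. The pod URL, client credentials, and scope names are placeholders, and the discovery and token endpoints are the generic OpenID Connect ones rather than anything confirmed from Diaspora's actual routes:

```ruby
require "net/http"
require "json"
require "uri"

pod = URI("https://pod.example.com") # hypothetical pod

# 1. Discover the provider's endpoints (standard OpenID Connect discovery).
discovery = JSON.parse(Net::HTTP.get(pod + "/.well-known/openid-configuration"))

# 2. Send the user to the authorization endpoint to approve the app.
auth_url = URI(discovery["authorization_endpoint"])
auth_url.query = URI.encode_www_form(
  client_id:     "YOUR_CLIENT_ID",
  redirect_uri:  "https://app.example.com/callback",
  response_type: "code",
  scope:         "openid profile", # scope names are illustrative
  state:         "random-anti-csrf-value"
)
puts "Open this in a browser and approve the app: #{auth_url}"

# 3. Exchange the code from the redirect for an access token.
print "Paste the ?code= value from the redirect: "
code = gets.chomp
response = Net::HTTP.post_form(URI(discovery["token_endpoint"]),
  "grant_type"    => "authorization_code",
  "code"          => code,
  "redirect_uri"  => "https://app.example.com/callback",
  "client_id"     => "YOUR_CLIENT_ID",
  "client_secret" => "YOUR_CLIENT_SECRET")
tokens = JSON.parse(response.body) # access_token, id_token, etc.
```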


Diaspora API Dev Progress Report 29

As we begin to wrap up the year we are also beginning to wrap up the API, getting ready for the "real" pull request for the API code.  We are down to one last code review of the final clean up pass before we have it looked at by the core team.  I think the code is pretty solid, but it will of course have problems that are discovered during the review and the testing.  Ah, the testing, the real world testing that we really need to do.  To get there we need to have a test server.  Thankfully that's all taken care of now and we've had the first data interactions with a pod.


Facebook Data Export Hidden Pitfalls

As I get more and more fed up with Facebook while also getting more and more embedded into the Fediverse, I've been considering the whole #deletefacebook campaign again. I turned off Facebook earlier but never deleted it. As the new year approached the thought of shutting it down appealed to me, but then I went a step further, thinking I should just blow away all of my data as well. There are lots of posts with lots of data and lots of associations that I want to keep though. Thankfully Facebook provides a mechanism for extracting your data. Unfortunately, if you assume all of your data is there you'll probably be wrong.


First Jekyll Post

I've been blogging on Wordpress since 2013. For a long time before that I had wanted to blog and tried LiveJournal and sites like that. It wasn't until 2013, when I was deciding to embark on a personal fitness experiment, that I finally bit the bullet and created the N=1 blog. The original premise was exploring the whole area of Quantified Self and longevity for my own purposes. It was going to be kicked off by a grand experiment of living various different fitness lifestyles for periods of time to see if any made a dramatic difference, positive or negative. I never really got too far into that experiment, and then the blog became my ramblings on the topic. Over time I had less interest in that and more in software engineering. Rather than create a whole new blog I decided to just add new categories. As the boundaries of what I wanted to post became less clear it really just became my public journal on all topics interesting to me.


Diaspora API Dev Progress Report 28

Today is a momentous day in the Diaspora API development saga.  Today we have completed primary development of the API, the unit tests, and the external test harness.  There are still two code reviews between that and the real code review for integration into the main development branch, but all of the major work is complete.  What does that mean exactly?


Diaspora API Dev Progress Report 27

Boy are we really coming down the home stretch now!  All of the scopes are implemented in every API endpoint now, with corresponding tests to confirm that the permissions are working correctly.  The most difficult of those was, once again, the Streams.  After beating my head against a rock a lot yesterday I put the whole project down for the day and then picked it up today.  After warming up on the other endpoints I started working my way through getting Streams working such that it could filter private data.  After a bit of fumbling I finally got to a relatively simple solution to the problem and got all the tests passing correctly.


Diaspora API Dev Progress Report 26

It's been almost a week since there's been an update on the API.  I've been busy with other things and travel, so it didn't get as much focus as I would have liked to have given it.  However, there has been some progress.  Thanks to Frank's help we've been able to get all of the side branches merged into the core API branch, so we are now coming down the home stretch on getting it ready for integration.  The first order of business for that is getting the OpenID security stuff squared away.  I'm still working on understanding that better, and the more I go back to it and work with it the better it looks.  There is still the question of the "refresh token" workflow, but work has been done on it, so if anything it's a small tweak or a documentation thing versus a from scratch development thing.  Even in the event that it was a from scratch thing, with the code base I have and the examples I mentioned before it shouldn't be a huge effort to get it working.  Most of the security work is therefore integrating in the much more fine grained security scopes which Senya has been working to hone.


Facebook Messenger Alternatives Redux (Federated and Non-Federated)

While I post mostly on Fediverse platforms like Diaspora and Mastodon, and am focusing my development efforts there, replacing the instant messaging accessibility of Facebook Messenger has been elusive. I tried Wickr and it's okay but not the most user friendly. Its claim to fame is that messages only go from the sender to the receiver without a server between, except for authentication. That makes the flow clunky, to say the least, which is why leaving Facebook has been one of my least successful aspects. I wanted to explore other options, and in the last week I pulled the trigger and actually did it.


Diaspora API Dev Progress Report 25

With the documentation changes wrapped up, but holding off on PRs until things solidify a bit more from the code scrub process, it was time to move on to the OpenID deep dive and review.  Up until now I've been working with an authorization workflow that required me to request a new token every 24 hours and for the user to authenticate it.  I wasn't sure how much of that was because of the flow I chose or intrinsic to how it was coded up.  As I continued to go over the OpenID documentation and other articles on the process I just couldn't get it working.  It was then clear to me that what I needed was an example to help me.

Luckily Nov Matake created some example projects to go along with his OpenID gems, one for the OpenID Connect Provider (the server side) and one for the OpenID Relying Party (the app side).  I figured with those everything would be good to go.  After all, this was the same code he had running up on Heroku, but I wanted to see the nitty gritty details and set it up on both sides since I was going to need to do that with Diaspora and the test harness, or any other API use case I may be interested in.  As I quickly found out, these projects have never been updated.  They still rely on old versions of Ruby and Rails.  Instead of trying to downshift everything to those versions I decided to fork the projects and get them running under Ruby 2.4+ and Rails 5.  Unfortunately that derailed my entire Diaspora development effort for the day. The upside is that the community will have modern versions of these projects to use.  I intend to polish them up a little more and then issue a PR back to the original project.  My versions however can be found on my GitHub profile, with the Connect Provider here and the Relying Party here.

In the process of doing these upgrades I was able to learn a lot more about porting Ruby code up from older versions.  I also got a much better understanding of some OpenID flows.  I'm going to use that to continue to move forward on the review of the implementation in the API and on looking at client side implementation details.  Because of the complexity of that whole process I think it's probably something developers could use a good amount of help with via blog posts and examples.

In Summary:

  • Documentation updates are complete but waiting for PRs for after the code scrub
  • Updated Ruby on Rails OpenID examples from Nov Matake to work under Rails 5

You can follow the status dashboard at this Google Sheet as well.


Diaspora API Dev Progress Report 24

Yesterday I said the paging API was complete but needed to be reviewed.  The more I talked over some elements with people in exchanges on Diaspora, the more I realized there were a couple of tweaks I needed to make.  The first suggestion I implemented was to have paging on any endpoint that returns multiple elements.  The second was to have a parameter for specifying the number of elements requested.  I was pleased that supporting that feature was really just two lines of code to change, as sketched below.  However, while in there I decided to beef up the defensive programming techniques in some other places as well.
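
For a feel of how small that change is, here is a minimal sketch of honoring a requested page size in a Rails controller; the parameter name, default, and cap are illustrative assumptions rather than the spec's exact values:

```ruby
# Hypothetical helper in an API controller: honor a client-requested
# page size, falling back to a default and capping it defensively.
DEFAULT_PAGE_SIZE = 15
MAX_PAGE_SIZE     = 100

def per_page
  params.fetch(:per_page, DEFAULT_PAGE_SIZE).to_i.clamp(1, MAX_PAGE_SIZE)
end
```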

After that was done I moved on to implementing the ability to vote on polls.  There was no natural home for it, but since it is interacting with a post I put it on the Post Interactions endpoint rather than create a dedicated endpoint with just one method.  It aliases to a path in the same way as the rest of the interactions as well, so I think it's consistent.  That also required moving some things around from the existing endpoint into a service and then having both call that.  Since there were no tests around that capability I ended up writing those as well.  With that done it's time to move on to the documentation and then start hitting up the OpenID review.

In Summary:

  • Incorporated suggestions into the API paging
  • Completed the Poll Voting method
  • Moving on to documentation updates

You can follow the status dashboard at this Google Sheet as well.


Diaspora API Dev Progress Report 23

After a day of coding the paging is now in every endpoint that should have it.  That means that we have paging right now for:

  • Contacts
  • Photos
  • Posts
  • Comments
  • Notifications
  • Conversations (but not messages in conversations)
  • Search
  • Streams

Because of the size of the code changes I would imagine there will at least be some tweaking, and I could imagine there being some larger refactoring afterward too, but it's in as solid, working, and performant a space as the existing standard endpoints, so I'm happy with it.

Now it's on to the rest of the checklist.  With the scopes being rounded out I'm going to hold off on the security review for a little while longer.  The first low hanging fruit I'm working on is adding the ability to vote on polls to the API spec.  It was an oversight in the original design but it should be easy to do.  I just need to decide which endpoint to add it to.  After that I'm going to double back to the mundane documentation update task.  At that point I think it'll be time to get up to my elbows in the OpenID code and get ready to make changes for the new scopes.

In Summary:

  • Paging is now complete and ready for review
  • Starting work on voting on polls through the API

You can follow the status dashboard at this Google Sheet as well.


Diaspora API Dev Progress Report 22

Paging, paging, and more paging.  I haven't been committing as much time to development the last few days as I'd like.  Some of that is frustration with the development process on the paging, which has been a lot of trial and error.  Some of it is just how my schedule is working out too.  There is progress there though.  I have what I'd consider to be the rounded out API paging infrastructure in place.  It has migrated a bit since the last update because, as I tried to use it, I wasn't happy with it.  I'm still not happy with it but it is suitable.  There will probably be some additional tweaking before final integration, but what it allows is for us to have paging.  I ended up wringing out design problems by wiring it into the Aspects Contacts endpoint method (to test index-based paging) and the User's Posts endpoint (to test time-based paging).  With all of that working and unit tested I'm now moving on to adding it to the rest of the endpoints. There have also been some additional discussions on the permissions scopes for the endpoints as well, and I think we've converged on a good final set.

In Summary:

  • Paging API infrastructure modified to current MVP (I think) status
  • Paging API now used in the Aspects Contacts and the Users Posts method
  • Rounding out the remaining endpoints and updating the test harness

You can follow the status dashboard at this Google Sheet as well.


Diaspora API Dev Progress Report 21

Coming up with a paging infrastructure for the API while looking at all of the ways it could be used and abused hasn't been fun.  Not that it hasn't been totally worthwhile.  I've actually learned a lot more about some of the nuances of how ActiveRecord and related libraries build up their queries. I've thought a lot more about the nature of the queries within Diaspora too.  At the same time my head is numb, and for all of the effort I only got a half-completed design and less than 100 lines of code across two classes, not that more lines is necessarily better.

So what we will have are two paginator types: index based and time based. The standard methods across the two are:

  • page_data: returns the current page of data for the passed-in query
  • next_page: returns information to go to the next page of data
  • previous_page: returns information to go to the previous page of data

The previous/next page functions will either return a new paginator object that corresponds to the next page or a string of query parameters that can be passed back out from a REST endpoint.

Both paginator types take a query object that will then have additional paging logic wrapped around it.  If one is doing an index-based query this is just wrapping the WillPaginate library.  However, if one is doing a time-based query then it's a little more complicated than that.  We aren't simply moving around indexes; we are actually doing some time math.  That is all coded directly in the class.  The big difference between the two comes in how the ordering happens on the SQL query.  In both cases you can pass in an ordered query without throwing an error.  However, in the case of the IndexPaginator one probably wants to pass in their preferred order, otherwise they'll get whatever the natural order from the database is.  In the case of the TimePaginator it wants to keep control over sorting by whichever time field the calling code is using, so adding an additional sort could create confusing results.
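
To make that concrete, here is a rough sketch of the two paginator shapes. The three public methods match the list above; the constructor arguments, the string-returning variant of next/previous, and everything else are illustrative assumptions rather than the actual Diaspora classes:

```ruby
# Rough sketch only. IndexPaginator leans on WillPaginate; TimePaginator
# does the time math itself and owns the ordering.
class IndexPaginator
  def initialize(query, page: 1, per_page: 15)
    @query, @page, @per_page = query, page, per_page
  end

  # Honors whatever order the caller set on the query.
  def page_data
    @query.paginate(page: @page, per_page: @per_page)
  end

  def next_page
    IndexPaginator.new(@query, page: @page + 1, per_page: @per_page)
  end

  def previous_page
    IndexPaginator.new(@query, page: @page - 1, per_page: @per_page) if @page > 1
  end
end

class TimePaginator
  def initialize(query, time_field: :created_at, before: Time.zone.now, per_page: 15)
    @query, @time_field, @before, @per_page = query, time_field, before, per_page
  end

  # A page is "the newest N records strictly before time T"; the class
  # controls the sort, so callers should not add their own ordering.
  def page_data
    @query.where(@query.klass.arel_table[@time_field].lt(@before))
          .order(@time_field => :desc)
          .limit(@per_page)
  end

  def next_page
    last = page_data.to_a.last
    TimePaginator.new(@query, time_field: @time_field,
                      before: last[@time_field], per_page: @per_page) if last
  end

  # previous_page would be the mirror image: a gt comparison against the
  # newest time on the current page with an ascending sort (omitted here).
end
```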

Now that the paginators are done I need to add a presenter class that knows how to turn the query parameters into a "link" field with full URLs, per the API specification, and to update the services to call into and return the paginated data instead of their current form.  I think I'll do one that uses indexes, like contacts, followed by one that uses time, like user posts, and then start filling it out the rest of the way from there.

In summary:

  • Finished experimenting with the base pagination classes and completed them
  • Starting to wire pagination into the first endpoints

Diaspora API Dev Progress Report 20

Now that we've hit feature complete status it's about getting more of the legwork done to get us really ready for integration.  The first necessary feature we need before that is paging.  As I wrote earlier, some endpoints don't need paging, and all of them technically have it as an optional thing.  However, to be really useful we need to have paging for several endpoints like posts, photos, conversations, et cetera.  It looks like we can leverage a lot of the way we do paging in the lower levels for streams and just create a standard pager class that the API endpoints that need it can use.  I've laid out how I want to approach that so now it's on to implementation.

Along with the progress on the paging there has been progress in other mundane areas.  All of these features were developed in side branches which needed to be reviewed and integrated into the main API branch.  We are down to one endpoint left before the API branch itself is feature complete, not just having the code.  All of the branches are orthogonal except for the routes.rb file and the en.yml messages file, so it's a pretty easy integration but it needs to be done properly.  In the meantime we are also having discussions about the finer grained permission sets that apps will request and users will be notified about.  So, for example, an app could be given permissions to only read posts but read/write comments on posts, and so on.  The endpoints already check for read/write tokens but those are broad tokens.  Part of the next steps will be putting in the proper requests and making sure that the information presented to users is clear.
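
As a rough illustration of what a finer grained check might look like at the endpoint level, here is a sketch; the scope names and the helper method are hypothetical, not the actual implementation:

```ruby
# Hypothetical guard in an API controller: reading comments needs a read
# scope, while creating or deleting them needs the corresponding write scope.
class Api::V1::CommentsController < Api::V1::BaseController
  before_action -> { require_scope!("comments:read") },   only: %i[index show]
  before_action -> { require_scope!("comments:modify") }, only: %i[create destroy]

  private

  # Assumes the authenticated access token exposes its granted scopes.
  def require_scope!(scope)
    head :forbidden unless current_token_scopes.include?(scope)
  end
end
```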

In summary:

  • All but one endpoint is integrated back into the API main branch
  • Started work on the API Paging infrastructure
  • Looking at the finer grained permissions for each endpoint

Open Source Dot-Net is 4 years old and going strong!

It seems like just a couple of years ago that Microsoft, the evil empire of the 1990s and early 2000s, embraced open source and put the .NET ecosystem into open source.  It was a shocking event which was met with some pessimism by a community that had been bitten far too many times by the old Microsoft mantra "embrace, extend, extinguish" (not that they were unique in this mantra).  It's shocking that we are four years into this process, but more shocking is how well the .NET community is functioning.  This is not "in the open source" as code for "you can see the code but we are the developers."  Microsoft, against all my expectations, has successfully built an open source community around open source .NET.  Take a look at the pull request statistics.  There is a substantial community element in most of the pieces (see the chart, and to read more about it check out Matt Warren's blog post on this):

If you look at the time series data Warren has created it looks even more promising.  That's not to say all is well for everyone in the .NET open source world.

As a person that tried to get back into it, to the point of polishing off SharpenNG to make it work in a post Java 7 world, I have to say that even with the improvements over the last few years the non-Windows platforms are still not first class citizens.  Development for .NET sings under Visual Studio, which of course only runs on Windows.  The old Xamarin Studio, rebranded as Visual Studio for Mac, does provide a decent experience but still nothing in comparison.  People on Linux, on the other hand, are out in the cold.  Yes, there are the command line tools and Visual Studio Code.  That works a lot better than I expected, but you can feel how clunky that development is in comparison, and MonoDevelop seems to get worse and worse as time goes on.  When I think about dabbling with .NET again I think about trying JetBrains' Rider next time.  Perhaps they've cracked the nut.  One thing I refuse to do is jump to Windows.

Related to all of that is the other elephant in the room: Microsoft doesn't support UI development on Linux, nor has any plans to.  There are open source alternatives like Avalonia and Eto.NET.  I know that Michael Dominic's development shop was able to turn out a live geospatial cross platform app, Gryphon, using Avalonia, so there can be some serious work done with this.  Maybe because of that an official blessing from Microsoft isn't needed, especially if Rider combined with the above fits the bill.  Maybe that's the community evolving beyond Microsoft too?  Still, at this stage there is a second (or in the case of Linux third) class citizenship feel about it.  It's orders of magnitude further along than I thought they would get though, which is a promising sign.


Diaspora API Dev Progress Report 19

We’ve finally reached the milestone we’ve all been waiting for.  With the completion of the Search API Endpoint the Diaspora API is now feature complete.  That doesn’t mean that it’s ready for integration into the mainline branch.  It also doesn’t mean that there isn’t more fundamental work that has to be done before it can be used on a production system.  It does however mean that we can start working on rounding out some of the other fundamentals and make our way in that direction.

The first thing that I am going to work on is the paging aspect of the API.  The API spec discusses paging as a thing that endpoints may or may not do.  Right now there is no paging.  That's fine for some things, like getting a list of Aspects for a user.  It is a requirement for something like getting a list of a user's posts or for getting your stream.  For non-developers who are reading this, think of this as the piece that makes your "infinite scroll" work.  Diaspora has implemented this in other areas but it will have to work a bit differently for the API.  We've already had discussions about how we want it to work and there is a format specification for reporting it back.  It therefore should be relatively straightforward to get it implemented.  That is what I'm working on right now.  After that we'll want to go over all of the new code with a fine tooth comb for style and idiom consistency (beyond the automatic style checker), security reviews, etc.  Lastly we'll want to get the OpenID authentication/authorization/etc. stuff polished up a bit.  Currently the app has to be re-registered every day.  That's not going to be viable for a real user even if it is for testing.

Still, the fact we’ve reached a feature complete milestone is great news and I’m excited to be ending the weekend on that high note.

In summary:

  • Diaspora API is now feature complete
  • Search API endpoint, unit tests, and test harness are complete
  • User contacts method implemented, completing the Users endpoint
  • Beginning work on paging infrastructure for API endpoints that need it

To follow along with status please see the Google Sheet Dashboard.


Diaspora API Dev Progress Report 18

After the long-winded post a few days ago on the API Status the latest update is pretty brief but important:

  • Notifications API endpoint, unit tests, and test harness are complete
  • Work on the last endpoint (search) has begun.

Diaspora API Dev Progress Report 17

The last couple of days have been a lot of heavy effort slogging through some increasingly complex changes to get the API going.  I started with what I thought was going to be the relatively easy notifications endpoint; however, the deeper I went into it, the more I realized that I either had to come up with some relatively (for me anyway) complex queries to populate some of the return types or settle for some N+1 query behaviors.  "N+1 queries" are ones where you pull the results one piece at a time.  That's fine for smaller data sets, like five or ten entries, but if you are dealing with hundreds of entries you are really thrashing your system.  So I got about half way through the notifications API and then put it on the shelf and moved on to the API I was dreading the most: Photos.
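
For illustration, a contrived ActiveRecord comparison of the two behaviors (the model and association names are hypothetical):

```ruby
# N+1: one query for the notifications, then one extra query per
# notification to fetch its actor — 101 queries for 100 rows.
Notification.limit(100).each do |notification|
  puts notification.actor.name
end

# Eager loaded: two queries total, no matter how many rows come back.
Notification.includes(:actor).limit(100).each do |notification|
  puts notification.actor.name
end
```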

I was really psyching myself out about having to deal with the whole image file upload part of the Photos API and then the subsequent tie in with the Posts API.  It shouldn't be that complicated, but these are things I had never done in Rails or with the Kotlin Fuel framework.  How would they interact?  How difficult would the security checks be?  You get the idea.  It did take several hours of figuring out what the current controller is doing and then how I wanted to refactor the more complicated operations into a service, but I got there.  Once I had that I had to test the whole aspect of limited posts et cetera, which I hadn't done as well as I had thought previously.  Thankfully my Ruby unit tests were solid; I just had some hiccups in my test harness.

At the end of the day we have the Photos API and the Posts API working with photos perfectly, to the point where I was able to make a fully populated post including an image that was uploaded externally as well.  That means I'm going to jump back on the Notifications API to wrap that up, and then all that's left is the Search API.

In summary:

  • Partial Progress on the Notifications API but shelved to figure out queries later
  • Posts API is feature complete with full tests
  • Was able to create an entirely populated post with the respective images from scratch using an external application for the first time ever in Diaspora (see this post)
  • 1.5 Endpoints left to go to be feature complete

Diaspora API First: A Full Externally Created Post

After slogging away for most of today on the Photos API, with lots of work needed to understand how things function and a couple more tweaks before it was ready, I decided to celebrate by showing the ultimate progress report: a screenshot.  What is so special about this screenshot?  It is the first post in Diaspora that has been fully made by an external application.  The "external application" in this case is a test harness written in Kotlin which is designed around the API spec.  This test harness first uploaded the image file, then it created the post with every feature a post can have, including location, polls, and references to other users.  The post was written by "user3" (for testing you might as well stick to simple names).  This is a screenshot from user1's perspective.  Notice that they also got the expected notification.  Yes, it's still a bit of a ways from done, but it's still a great milestone, so I'd say it's time to celebrate for a bit before getting back to it :).


Diaspora API Dev Progress Report 16

Brief update from today on the Diaspora API development progress:

  • On the Users API it turns out we probably still want to have the contacts endpoint, if only for the primary user, since the Contacts API works on a per-aspect level the way it is mapped.  Whether that method shows up in the Contacts API at a different mapping or on the User itself is still TBD but it will be a change to the spec.
  • The Post Interactions API is feature complete with full tests and the completed test harness.
  • Work has begun on the Notifications API.  This is the first change I’ve done that will require a DB migration, adding a new GUID column to notifications, so this is going to take a bit longer for me to complete as I do background research on that.

At this point it's actually easier to look at what is left to do versus what we have done (which is a huge plus):

  • The only two endpoints that haven’t been touched are Photos and Search. Once these are done (along with work on Notifications) the entire API spec will have been implemented.
  • Implement a new poll interaction method for answering a poll through the API
  • We need to implement paging on several of the endpoints.  This technique will be similar to how it’s done in the core controllers but it has to be different because the return type needs to have the next/previous pages and the corresponding format needs to honor that.  The actual mechanics of the queries are pretty much the same though so grafting them into the existing feature complete controllers should be relatively easy.
  • Right now the OpenID integration works well enough for testing but it currently requires revalidating the app every 24 hours.  This has to be tweaked to be more reasonable.  There may be some refactoring in there as well.
  • The Posts API Endpoint accepts any photos currently, including those that are already attached to another post.  This is not consistent behavior and has to be corrected to only allow a “pending” photo to be added.
  • Sweep of all of the APIs for consistency on security, service initialization (where appropriate), params parsing idioms, etc.
  • Sweep through the unit tests to make sure that edge cases are covered in the same way
  • Documentation updates to account for things discovered during the development (error codes added, format tweaks etc.)

Diaspora API Dev Progress Report 15

It's been two weeks since my last Diaspora API Dev Progress report, but that's not because nothing has been going on.  Between attending RubyConf 2018 last week and this week being a holiday week there was definitely a drop off in how much development time I put into Diaspora, and therefore into the API.  However, over that time there has been some development progress:

  • All of the previous work has been successfully merged down into the main API branch.
  • The Contacts API is feature complete with full tests and the completed test harness
  • The Users API is feature complete with full tests and test harness with the exception of the User Contacts API method.  That method was supposed to be able to return another user’s contacts if that user allowed that.  However that feature no longer exists in Diaspora so I believe it is extraneous.  If that’s agreed upon then this is feature complete and ready to go.

This week I should be able to apply a lot more development effort than I have been able to the past couple of weeks.  Hopefully that translates into forward progress on some more endpoints.  The trend seems to be that they are getting more difficult to knock out so my velocity is slowing.  I guess it’s better than being stymied in the beginning.


MacBook or XPS Linux Ultrabook…looks like a Mac after all

I have had two laptops for most of the last twenty years: a personal laptop and a work laptop.  Before I owned my own company that was a question of the policies of the companies I worked for.  While I had my own company it was about living by the same rules that applied to everyone else in the company (I'm a big fan of dogfooding anything I do).  Now that I'm on the individual consulting/developer bandwagon I'm in the same boat.  I have a pretty decent System76 Linux laptop that's a couple years old but pretty bulky.  I have a positively ancient 2011 MacBook Air.  Disk space and speed wise it is fine.  Memory wise, at 4 GB it's starting to get a little cramped if I have too many Google Drive tabs open and the like.  Processor wise, though, it is a dog.  It's at the point now where some sites like Facebook and Gmail can take tens of seconds to complete rendering.  At least they allow interactions while they finish parsing their JavaScript etc.


Diaspora API Dev Progress Report 14

Yesterday was the first day in several that I could commit real time to D* again.  After getting back up to speed and making the status post I went on into the API development again.  I was able to make some good progress on some brand new endpoints.  The first one I worked on, which is the first that needed from scratch coding of the main code, was the Tag Followings controller.  The day before I had struggled getting Rails to make the POST for creating tags work against the spec.  However, after talking it over and thinking about it, it was the spec that needed changing.  In another software framework I could just make it work, but relying on the auto-wiring in Rails brought the design flaw to light.  With that simple change in place, real development of the Tag Followings endpoint started yesterday.

The methodology I'm using when developing the new controllers is as follows.  First, I want to get the basic infrastructure in place along with the tests.  That means that the first phase is to write the skeleton of the controller code, the skeleton of the RSpec tests, and to wire the two together.  I make sure that the routes behave the way I think they should according to the API spec without worrying about returns etc.  The skeleton of the controller should implement all routes.  The skeleton of the unit tests should be testing for the happy path and reasonable error conditions.  So that's stuff like: the user passes the wrong ID for a post that they are trying to comment on, or an empty new tag to follow, etc.  I then go over to the external test application and code up the corresponding code in there as well.  With everything running I make sure that the endpoint is reachable from the outside (which it should be), but don't worry about returns, processing, etc.  If it's possible to set up fake returns easily I do that; otherwise I just ensure the proper methods are called.

After all of that is coded and committed it is off to filling in the controller method by method.  For each one coded up I complete the unit tests and the external test harness interactions as well.  Once that's all done I move on to the next one.  In some cases, like Tag Followings, there needs to be refactoring elsewhere, which has implications on the above flow.  I usually do those pieces before coding the controller.  It is at design time that it becomes apparent whether I should be using common code with another controller, code which may not yet exist as a service component.  If I need to make any changes over in other code I check that there are unit tests which properly cover the changes I am going to make, at least as best as I can tell, write those, and then make the changes.  This should minimize the possibility of disruption.
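
As a flavor of that skeleton phase, here is a minimal RSpec request spec of the kind described; the route, parameter names, and status codes are assumptions for illustration, not the actual spec:

```ruby
require "spec_helper"

# Hypothetical skeleton: pin down routes and happy/error path status
# codes first; body assertions get filled in as each method is implemented.
describe "tag followings endpoint", type: :request do
  let(:access_token) { "token-from-a-registered-test-app" } # stand-in

  it "follows a new tag (happy path)" do
    post "/api/v1/tag_followings",
         params: {name: "diaspora", access_token: access_token}
    expect(response.status).to eq(204)
  end

  it "rejects an empty tag name (error path)" do
    post "/api/v1/tag_followings",
         params: {name: "", access_token: access_token}
    expect(response.status).to eq(422)
  end
end
```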

When interacting with Frank R. on the merge requests, one of the pieces of feedback I got was that with everything compressed down to one commit it was hard to tell why I did certain things.  As I code, all of that history is there, but I've been rebasing everything down to one commit per endpoint so that when it comes time to merge the API branch into the main develop branch the log will look something like: Post API endpoint complete, Comments API endpoint complete, etc.  To get around this I'm trying a new flow.  When I think something is ready to be merged I'm doing a Work in Progress (WIP) Pull Request (PR).  That PR has the raw commit history and the name "WIP" in the leader of the label.  After a review and a thumbs up I'm going to rebase it down to one commit and then submit the final one for integration.  By the time the WIP is done, however, the code is feature complete and should be ready to be merged.  I'm therefore counting WIP PRs as the threshold for saying something is feature complete.

With all that said the three new endpoints that were feature complete as of yesterday are: Tag Followings, Aspects, and Reshares.


Diaspora API Dev Progress Report 13

After a week of distractions I finally have a new update on the progress.  We've successfully merged all the work done to date into the one main API branch and are now working on new features moving forward.  The first feature we have completed, with full tests and test harness interaction, is the ability to manage and work with the user's followed tags.  So we have the full post lifecycle from before, and now tags, done but not yet merged into the main branch.


Diaspora API Dev Progress Report 12

The merging of the various side branches into the main branch is coming along.  Because this isn't being done as a primary job there is a bit of an expected delay between the pull request (PR) being generated and the branch being merged in.  This is giving me the opportunity to work on other features in Diaspora though.  The process is going along much faster than I expected it to, which is good.  At this point we have merged the Likes, Comments, and Post Endpoints together.  The PR on the Post Endpoint is now queued up; however, all of those changes exist in one branch.  What that means is that I was able to perform a full Post lifecycle test using the test harness.  This means that we have an external application talking through the API and doing the following for a user (sketched in code after the list):

  1. Creating a post
  2. Querying for the post and printing out its data
  3. Adding a comment to the post
  4. Liking the post
  5. Printing out the comments and who liked the post
  6. Deleting their comment on a post
  7. Unliking a post
  8. Deleting a post
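
In rough Ruby (the actual harness is Kotlin), that lifecycle has roughly the following shape; the pod URL, endpoint paths, and payload fields here are assumptions sketched from the draft spec, not verified routes:

```ruby
require "net/http"
require "json"
require "uri"

API   = "https://pod.example.com/api/v1"    # hypothetical pod and mount point
TOKEN = "access-token-from-the-openid-flow" # obtained as in the auth articles

# Tiny helper: send a JSON request with the access token attached.
def api(verb, path, payload = nil)
  uri = URI("#{API}#{path}?access_token=#{TOKEN}")
  req = Net::HTTP.const_get(verb).new(uri, "Content-Type" => "application/json")
  req.body = payload.to_json if payload
  Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
end

post = JSON.parse(api(:Post, "/posts", body: "Hello from the API!").body)
guid = post["guid"]

puts api(:Get, "/posts/#{guid}").body                        # query it back
comment = JSON.parse(api(:Post, "/posts/#{guid}/comments", body: "First!").body)
api(:Post, "/posts/#{guid}/likes")                           # like it
puts api(:Get, "/posts/#{guid}/comments").body               # list interactions
api(:Delete, "/posts/#{guid}/comments/#{comment["guid"]}")   # delete the comment
api(:Delete, "/posts/#{guid}/likes")                         # unlike
api(:Delete, "/posts/#{guid}")                               # delete the post
```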

This is a very important step. Follow additional progress on the API Progress Google Sheet.


Diaspora API Dev Progress Report 11

It's been a few days since I've been able to put real time into Diaspora development, but I'm back today.  Being back home from travel also means I can finally get past the blockers on the other branches.  I've actually gotten all of the branches I had been developing on to feature complete status, with full tests, and with the test harness fully coded against them.  That means that through the API one can complete the entire Post, Comment, Like, etc. lifecycle for posts with all data types (regular, photos, polls, location, etc.).  Conversations are also feature complete with full test harness coverage as well.  Streams are also complete; however, I haven't tested with sufficient post volumes to test paging behavior.  Now it's going to be the trick of getting past the tech debt and getting them merged together into the API branch.  Hopefully that'll come in the next day or two.  I'm going to spend some time doing other Diaspora stuff besides that as I work through those pieces as well.  As always, follow the progress on the API Progress Google Sheet.  After the merge I'll be moving on to the Tags Endpoint, the first endpoint that is a full from scratch development for me.

In Summary:

  • Feature complete endpoints with full external test harness interaction: Comments, Conversations, Likes, Posts, and Streams (except for paging behavior).
  • Ready for merging of the side branches into the main API branch

Diaspora API Dev Progress Report 10

Even though it was another short day on the road it was a productive day.  The Conversations Endpoint’s Messages method got completed shortly after I typed up the previous day’s status message this morning.  I then jumped onto the Streams API.


Diaspora API Dev Progress Report 9

I'm still on the road so my contributions aren't as great as I'd like them to be, but I did manage to make some progress on the API development.  At this point the Conversations Endpoint is done minus the message listing of a conversation itself (that's up next).  The test harness is coded up against Conversations such that it can create, read, and hide/ignore them.  As I finish up the Conversations Endpoint work and wrap up the Posts Endpoint work when I get back home, I will soon be leaving the world of reviewing the existing implementation done by Frank while augmenting the tests, writing test harnesses, and making changes to get all of the tests to pass.  I will then be entering the world of from scratch development on the rest of the API.


Diaspora API Dev Progress Report 8

While I'm on the road I've been hoping to get some more work in on the API.  Yesterday was a bust, and I knew it would be.  Today looked like it was going to be a bust too, but I actually was able to get some time in tonight due to some plans that were cancelled last minute.  As I sat down to start working I realized that I hadn't been quite as prepared to develop on the road as I thought.  Before leaving I made sure my development laptop's Ruby VM was fully configured and could compile the main code and the Kotlin test harness.  I was all good to go!  Except I forgot to push my work up to GitHub and GitLab.  Oops.  Well, that derailed continuing work on the Posts API Endpoint, but with plenty more endpoints to go I started up on the Conversations endpoint, the next most filled in one to start from.

I did make a good amount of progress fleshing out the unit tests and making some code changes to make the requests and returns on the Create method correspond to the specification.  It was at that point I realized I hadn't tested my setup quite thoroughly enough.  I didn't have a registered application in my OpenID setup on this dev instance.  I also didn't have the configurations I used when I set it up on my main development machine either.  After some fumbling around I did manage to get it registered so I could then start testing the external test harness against the endpoint.  After some final code tweaks I got that up and running and now have the test harness generating new conversations between two users!  On to the rest of the Conversations API tomorrow!


Diaspora API Dev Progress Report 7

I'm still making good albeit slow progress on the Posts Endpoint.  While the Posts Endpoint doesn't have a lot of methods, the complexity of the send and return data is far greater than the other endpoints I've done so far.  Posts have more than just text.  They can have polls, geolocation data, mentions, aspects management, and photos.  Yet posts are the core of the whole system.  They are the digital elements we interact with the most, so progress on this endpoint is crucial.  I'm pleased to say that at this point I've made enough progress with the unit tests and the test harness that I have been able to have an external program do the full lifecycle of posting: create a post, read a post, comment on a post, and like a post.  I'm pretty stoked about that!  While I have the full complement of all post data available on the GET method tested, I still have to create the test harness methods around pushing posts with ancillary data (location, polls, mentions, photos), and need to write the unit tests for photos as well.  The Photos endpoint for uploading photos during a real post creation process is a whole other matter though, but we'll get to it soon enough!


Diaspora API Dev Progress Report 6

Today I didn't make as much progress as I had hoped on the API, but important work was still done.  Yesterday I discovered that something was probably off in the way that the repository rebasing was done when I did it about a week ago.  Today I confirmed it.  Working with Benjamin Neff (SuperTux) I was able to figure out a path forward for correcting the problem.  While the git commands are pretty straightforward, me being comfortable that I've done it correctly is another matter, so I did the process three times in a row.  Each time I looked at the corresponding git log afterward and did a three way diff of the API branch head before the new rebase, the API branch head after the rebase, and the main Diaspora develop branch.  I may end up doing it a fourth time (or reconfirm this last time anyway) before doing a final push, after talking with Frank about it.

After getting past that I spent the other half of the time making actual progress on development.  Thanks to Dennis Schubert's (DensChub) efforts we were able to make some progress on some API questions I had.  After that I made changes to the respective implementations to make them consistent.  Then I went back to the Posts Endpoint testing.  I completed the full happy path testing on the GET for simple and fully filled in posts (text, photos, polls, mentions, and location).  I now have to add failure path testing on the GET, and the corresponding test harness methods, to complete that and move on to posting and deleting Posts.


Diaspora API Dev Progress Report 5

Another day, another progress report on the state of the Diaspora API development.  I had hoped by now that I'd be picking up a little more speed, but I always underestimate how painstaking working on high coverage unit tests is.  If I was doing a whack-it-together MVP startup-mode app I would always put automated tests around it for my own sanity, but since things are going to change, or maybe even get thrown away entirely, in relatively short order, there's no need to go gnat's ass down to the details.  That's not the case with the API.  Yes, the API is technically in a draft mode but it always looked like a really good draft.  The more I code against it and use it the more I believe that's true.  Yes, my development speed is increasing as I become more familiar with all the technologies and get past some more technical hurdles, but it might take the better part of a man month to finish this up (which is maybe a man-week more than I originally eyeballed).

The progress though has been steady.  I had a hiccup late last night with my test harness.  The Fuel HTTP library I'm using in Kotlin pushed a new release that requires the 1.3.0 version of Kotlin, which apparently is harder to come by than I thought.  Manually setting the version fixed it all, but not until after I spent half an hour fumbling around with it before giving up.  Today was the deep dive into the Comments endpoint.  As was the case with the Likes endpoint, Frank's previous work left a very solid base.  Fleshing out the tests for some different errant behaviors, testing error messages as well as codes, and finding problems with the interactions once the test harness interacts with it over HTTP were the usual gremlins to squash.  Still, with only two more mostly fleshed out endpoints to work with coming from Frank's code base, I have a feeling that the development pace will be slowing down.  Maybe I'll have gained sufficient efficiencies in my speed of coding on all of these to make up some of that difference.

Along with the above gremlins, now that the API is being interacted with I am seeing some potential fine grained details that need to be discussed.  That's all tracked in the issue tracker on the API documentation page though.  Again, this is solid work by the team putting the API together and by Frank's initial code base that I'm starting from.

In summary progress for the day:

  • Comments API Endpoint is finished and ready for pull request
  • Test harness example of interacting with the Comments API is completed
  • Some Issues were submitted to discuss minor changes to the status reporting back from the REST services on things like what happens when a Comment ID doesn’t match the Post ID that the REST endpoint was called with.
  • Some small documentation touch ups to address navigation

Diaspora API Dev Progress Report 4

Being in the early phases of getting the implementation started it was inevitable I would encounter a little extra inertia to overcome.  Part of that is my own doing, but all of it is important for having confidence in what I'm developing.  The easiest part was filling out the API Implementation Stoplight chart so everyone, including me, can track what is going on with the development.  Then it was on to a fork in the road of sorts: do I want to start an external test harness now or wait until more is implemented?  I decided on the former.


Diaspora API Dev Progress Report 3

While I made progress with a few hours of Diaspora API Dev yesterday it wasn’t until today that I finished my first code change towards the API: completing the Likes Endpoint.


Diaspora API Dev Progress Report 2

Yep, two Diaspora API dev reports on one day.  After taking a break for dinner and just watching some TV I got back to figuring out how to properly interface with the authentication and API from an external client.  I was re-reading the OpenID spec, watching some videos, reading some presentations, et cetera.  If I’m going to be working on the API this is something I definitely need to be deep diving into a lot more.  My initial order of business however was just getting it working.


Diaspora API Dev Progress Report 1

I’m only a few hours into getting fully going on the Diaspora API development project.  I had been pre-flying that whole experience earlier last week by studying the existing code base, familiarizing myself with the discussion threads et cetera.  Over the last couple of days I’ve been trying to focus more on moving the ball forward as well.  Before really doing that though there is still a little ground work to do.


Milestone: Higher Responses on Diaspora instead of Facebook

The Cambridge Analytica debacle from earlier this year and the subsequent #deletefacebook storm brought me to the alternative social media platform Diaspora.  At the time, as I wrote here, I had hoped to leave the walled gardens forever.  Initially I did just that, but practicalities changed that forced isolation quite a bit.  In some cases, like DDG, I'm still 99% using the open alternative.  In others, like YouTube, I'm mostly using the old system because I just can't get what I need out of the alternative system yet (although I still try more and more every week).  However, for much of it, especially on the social media side, it's more of a mix.  I'm on Diaspora as much as I'm on Facebook.  I'm on Mastodon more than I'm on Twitter, but Twitter was always a small platform for me versus my usage of Facebook.  The best way to think of this blend is that I try to make Diaspora and Mastodon my primary platforms and Facebook my secondary one, with Twitter a distant third.

What that means practically is that I’m pretty much logged into Diaspora, Mastodon, and Facebook continuously throughout the day.  The first places I post to are Diaspora and Mastodon.  The first places I check posts are Diaspora and Mastodon.  Most of my new activity is on Diaspora and Mastodon, with manual cross-posting (thanks again, Facebook, for permanently screwing up your API to prevent external posting) when I want to share the same thing on Facebook as well.  Because I have just over 1000 friends on Facebook, and almost all of them are people I’ve interacted with in real life (most mere acquaintances or people met once at a social function or something), there is just a larger volume of relevant and more personally resonating posts from others there.  So if one were to look at my activity feeds and notifications on a given morning you’d see tons of activity on Facebook and a little activity on Diaspora and Mastodon.  Today was different.

Today the equation was reversed.  Today I had more interactions to wade through on Diaspora.  I had more relevant interactions to wade through at that.  I had more notifications to wade through.  I even got comparable engagement on my cross-posted material from late last night on all three systems.  That’s the first time that’s happened since I went back to having a foot in both worlds!

Is it that I crossed a tipping point in the number of people I’m connected to on these alternative social media systems?  Is it that the influx of Google+ users has caused a spike in engagement across the systems in question?  I don’t know, and this will probably remain a noteworthy exception rather than the rule moving forward.  However it can’t be a bad sign, except in one way.  In the span of writing this article, a free association lasting 15 minutes, I’ve already received almost ten notifications on Diaspora.  I know that the notification controls are not as fine-grained on Diaspora as they are on Facebook.  Needing to tackle that sort of feature request in the near future would be a great problem to have :).


Let the Diaspora API Deep Dive Begin!

I can’t express how happy I am that I have the privilege of having a combination of time, ability, desire, and energy to contribute substantially to the Diaspora project right now.  Ever since I started using it in the spring it’s something I’ve wanted to be able to help with.  I certainly got my feet wet back then on some tweaks to the Twitter and Facebook interaction code, the latter of which is permanently broken thanks to Facebook’s new API spec.  With all the getting up to speed on Ruby, Rails, and the Diaspora code base, I’m looking forward to helping tackle a much larger and persistently requested piece of code: a Diaspora API.


Ramping Up Open Source Development Time

I’ve mostly been “microblogging” updates on Diaspora recently.  That’s a fancy way of saying I haven’t been doing any in-depth writing, instead just making quick ad hoc posts on social media.  As I ramp up my development on open source projects, primarily Diaspora by the looks of it, I’m hoping to start posting here more frequently: capturing new lessons learned, recording observations from my exploration of these newer languages and code bases, and just getting more writing in.

Over the summer I actually spent a good deal of time exploring different cross-platform development frameworks of the .NET and C++ variety.  That was intended to support a very niche open source project idea that I had conjured up around my classic computing hobby.  By the time I had made enough progress to potentially be productive (although I still want to explore wxWidgets a bit more), the bug to help on alternative social media platforms bit again.

So while I’ve been pining away for the opportunity to really start getting into Kotlin, JavaFX, and other technologies, my current path is taking me down the jewel-crusted path of Ruby, Ruby on Rails, and JavaScript.  These are the technologies that Diaspora is built upon.  In fact, as I’ve written before elsewhere, I’m really enjoying Ruby a lot.  RubyMine could use a bit of polish compared to how well IntelliJ works for Java and Kotlin, but it’s at least on par with the CLion C++ and Rider .NET IDEs.  Yes, I’ve fully converted to being a JetBrains user nowadays, even paying for a full license to the entire suite.  To people who know me, the fact I converted is probably going to be a bit of a shock.  To the casual reader coming here from my non-software interests this probably means little, but IDEs are very personal decisions and we get wedded to them pretty hard.

Sorry for the absence.  I hope to be a regular poster again for the half dozen of you that actually read this!


Ubuntu MATE 16.04 to 18.04 Upgrade Hiccup on VirtualBox Guest Additions

Since the release of Ubuntu 18.04 I’ve been using it a bunch in various VMs.  I do love the new minimal install feature.  Even though it doesn’t save that much hard disk space it does make things a lot less cluttered, which I absolutely love.  Because I work in VMs I’ve been experimenting with migrating OSes up to 18.04 rather than crushing old VMs, building from scratch, and porting data over.  This process has worked almost seamlessly the dozen or so times I’ve done it across many VMs from various baselines: mainline 16.04, mainline 17.10, Ubuntu MATE 16.04.  The core software itself seems to work perfectly fine out of the box, but as I said it is almost seamless, not seamless.  There is a bit of a wrinkle with the Ubuntu MATE upgrade with respect to the VirtualBox Guest Additions, specifically shared folder drives.


Adjusting to not-as-social social media alternatives

I’m now three weeks into picking up and using non-walled-garden social media systems instead of traditional ones, specifically Diaspora over Facebook and Twitter.  It has mostly been a good experience, despite some major disagreement with some of their user experience decisions and other rough edges that I hope to help fix soon as a contributor.  But the thing that sets social media apart from blogging or other static production ecosystems is the concept of sharing and interacting with other users.  By the nature of the fact that these massive digital halls are still pretty empty, I’m just not getting my fill of that.


Kotlin Compactness Example from Swift Blog Article

This pro-Swift article came across my RSS feed recently.  While I don’t want to do a direct comparison of Swift versus Kotlin, since I haven’t done any Swift coding, I did think it was interesting to point out similar points of efficiency in Kotlin using their simple example, compared to languages like Java (the language they picked on too).
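
I don’t have their exact sample in front of me, but the flavor of the Kotlin side of that compactness argument looks something like this hypothetical value type: one declaration generating the constructor, equality, hashing, printing, and copying that would take a page of boilerplate in classic Java:

    // A complete value type: constructor, getters, equals/hashCode,
    // toString, and copy() are all generated from this one line.
    data class Point(val x: Double, val y: Double)

    fun main() {
        val a = Point(1.0, 2.0)
        val b = a.copy(y = 3.0)        // non-destructive update
        println(a)                     // Point(x=1.0, y=2.0)
        println(a == Point(1.0, 2.0))  // structural equality: true
        println(b)                     // Point(x=1.0, y=3.0)
    }
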


Completing leaving the user data selling walled gardens

Over the weekend I had made a bunch of progress on migrating away from the walled garden systems.  I’m happy to report substantially more progress.  This will of course be an ongoing process of refinement and testing.  However I’m currently getting substantial amounts of my needs met in enough areas that I’m prepared to start pulling the plug on Facebook, the Google ecosystem, Twitter, and so on.  When I wrote about this over the weekend I had completed my hypothetical replacement of several systems.  I have some updates to those elements as well.  My current replacement portfolio looks as follows (summary at the very end):


Progress on leaving the user data selling walled gardens

As I wrote earlier this week, after the Cambridge Analytica event came to light my nagging feeling that I needed to get off platforms like Facebook and Google crossed a threshold.  It was no longer something I thought I should do but something I was going to actively do.  In one week I’ve made progress in pretty much every dimension (scroll down to the bottom if you just want my list of alternatives).


Replacing Facebook/Google Et Al With Open Platforms

I’ve had my moments in the past where Facebook pissed me off and I tried Google+.  That didn’t work out too well, so I went back to Facebook after they addressed some of those problems.  I’ve had moments where I was concerned about the amount of tracking Google does in searches, so I went to DuckDuckGo.  That’s still my main search engine, but sometimes I need results that come out better in Google so I go there.  I also use the Google platform for my e-mail, documents, etc.  The concept of them selling my data in exchange for free service has bothered me to varying degrees over the years, but seeing how greedily that data was manipulated recently is really amping that up for me.  The amount of information available to the highest bidder has always been a known quantity to me, but these recent stories are putting that up to eleven.  It’s not just the Cambridge Analytica story.  There is also the story about Facebook and other companies forcing users to turn over their keys, so to speak, so they can look at any and all of their personal data as a condition of working for them.  And there is the way that data has been exploited in difficult public discussions.


iPhone Excitement Isn’t Just For the Fanboys

I almost never wait in huge lines for anything.  I camped out once for football tickets in college.  Once.  I also once waited six hours for an iPhone 4 when it first came out.  It was my first smartphone and I had been putting off getting one for way too long.  That was it though.  Yet I know people who have waited in ever-decreasing lines for each iteration of the iPhone.  The shrinking lines are definitely a sign of the sizzle wearing off and the iPhone becoming just another smartphone.  Yet even at 8 pm last night there was a line for iPhones outside our local Apple store.  It didn’t wrap around the mall like in the iPhone 4 days, but a line for an iPhone 8 at the end of the first day was still pretty telling to me.


The Death of Ubuntu Desktop Was Greatly Exaggerated

It was just a few months ago that Ubuntu announced they were killing off Unity, their main desktop option.  Many people wondered if this was part of a larger pivot towards more profitable ventures and thus they would be leaving the desktop behind.  I too was filled with worry about that potential outcome, but calmed myself remembering that I was no longer locked into one vendor for my OS.  In the intervening months however it has become clear that Ubuntu is not killing off the desktop, far from it.  In fact the strides they are taking with Ubuntu 17.10 and Ubuntu 18.04 look like they are about to produce the strongest desktop offering to date.  Not having to carry the weight of a phone platform, their own desktop environment, etc. has allowed their team to focus on contributing positively to GNOME proper.  I’ve had the opportunity to play around with the Ubuntu 17.10 betas and have to say that I don’t think I’d be missing anything from my current Ubuntu experience.  I look forward to upgrading to 18.04 when the time comes, no longer worrying about whether one of my desktop baselines is going away.


Applying "Good" Programming To Old BASIC

On one of my classic computing Facebook groups there was a post quoting Edsger Dijkstra: “It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.”  It’s actually part of a much larger document where he condemns pretty much every higher-order language of the day, IBM, the cliquish nature of the computing industry, and so on.  Despite most of it being the equivalent of a Twitter rant, in fact each line is almost made for tweet-sized bites, there are some legitimate gems in there; one relevant to this topic being, “The tools we use have a profound (and devious!) influence on our thinking habits, and, therefore, on our thinking abilities.”  No, I don’t agree that starting with BASIC, or any other language, permanently breaks someone forever, but if the tools we use drive our thinking, they can leave us with bad habits to unlearn.  Yet has anyone actually tried to write BASIC, as in the BASIC dialects of the 60s, 70s, and early 80s, with actual design principles?  Fortunately/unfortunately, I tried a while ago, with some interesting results.


More Kotlin Homework To Do

While I’m obviously becoming quite enamored with Kotlin recently, this is like the early dating stage for me.  Everything is great when you first start dating someone, but it’s after you’ve been with them for a while and seen their warts, which everything and everyone has, that you finally decide whether it’s the right fit or not.


Kotlin Benchmark Initial Porting Complete…First Impressions Only

As I wrote about here yesterday, I am taking my exploration of Kotlin to the next level by looking at performance metrics using the Computer Language Benchmarks Game.  As of right now I’ve completed my first two steps: getting the benchmark suite building/running Kotlin code, and doing a straight port of the suite (follow along at the Gitlab project).  This was really 90% IntelliJ auto-converting followed by me fixing compilation errors (and submitting back to JetBrains a nasty conversion bug that came up in one test).  So now on to the results!  Well, actually, not so fast on that one…
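
As a flavor of the manual fix-ups involved (a representative illustration of mine, not the actual conversion bug I reported), Java idioms like unsigned shifts and implicit char arithmetic come out of the converter as named functions and explicit conversions that still need eyeballing:

    // Java: "c >>> shift" would widen the char and shift implicitly.
    // Kotlin needs the explicit widening and the named ushr function.
    fun javaStyleHash(c: Char, shift: Int): Int {
        val code = c.code       // Java's implicit char -> int widening
        return code ushr shift  // Java's >>> operator
    }
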


Kotlin Performance Benchmarks

I may be enamored of my new programming toy, Kotlin, but I’m not one to go blindly into something like this.  While there is a lot to love about the language, I was curious how fast it is compared to Java.  It’s all running in the same JVM, but as I know from Scala, another JVM language, there can be dramatic performance differences.  Benchmarking is the usual, and probably clichéd, way of addressing that.  The Computer Language Benchmarks Game website is as good a place as any to start.  Unfortunately no one has bothered making Kotlin language tests yet.  Undaunted, I saw this as an opportunity to contribute back as well as get a little extra Kotlin coding in.  So I’ve started a fork to develop and contribute back Kotlin versions.  You can follow along and/or contribute to the port at my Gitlab project.

My approach to this endeavor is as follows:

  • Update the project drivers and initialization files to get Kotlin running
  • Do a straight translation port of the latest version of each of the Java benchmarks
  • Compare the performance of the straight-port versions to Java
  • Create tweaked Kotlin versions to further optimize
  • Compare the performance of the tweaked versions to Java

I’ve already completed the first bullet; that work is on the develop branch of my repo.  I think the next two bullets will go relatively quickly.  Tweaking and optimization will be another matter.
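
To give a sense of what a straight translation port means in practice, here is a sketch in the spirit of those benchmarks; it’s a tight numeric loop of my own invention rather than a file from the actual suite.  The goal at this stage is to mirror the Java structure and save the idiomatic Kotlin rewrites for the tweaking pass:

    // Straight-port style: keep the Java shape (mutable locals, indexed
    // loop) and defer idiomatic rewrites to the later "tweaked" versions.
    fun main() {
        val n = 100_000_000
        var sign = -1.0
        var sum = 0.0
        for (i in 1..n) {        // Leibniz series for pi/4
            sign = -sign
            sum += sign / (2L * i - 1)
        }
        println("pi ~= ${4 * sum}")
    }
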

PS: a big thanks to Sebastian Thiel for setting up a project/repo that constantly mirrors the Benchmarks Game’s CVS repository.  It is an indispensable plus to be able to have the latest and greatest automatically (also thanks to Gitlab’s integration capabilities) as development moves forward.


I’ve Caught the Kotlin Bug…

Although my primary development language of recent years has been Java, I have been itching to get to a more modern language.  Yes, Oracle made lots of good strides with Java 8, but they are still falling woefully behind.  As a former heavy .NET developer, the open sourcing of C# and its becoming truly cross-platform made it my original go-to choice.  You can see that in articles I wrote here and in contributions I made to Sharpen to get it working under Java 8, with the new date types etc.  Throughout my experiments with C# I refused to go back to Windows, and sadly, while there have been great strides, the bottom line is that Linux is a third-rate supported platform compared to Windows and the not-quite-so-poorly-treated macOS.  But what alternative did I have?  The answer came with the increased news coverage, dare I say hype, around Kotlin.  This was a language I had looked at notionally before, but now I did a deep dive and I have to say I am really liking it.


Swagger Annotations coming to NancyFx

There are certain things in life that you take for granted and don’t realize it until you don’t have them anymore.  Swagger is definitely one of them.


Solus Is Solid One Week In (Minus One Thing)

As the whole “what happens to Unity” thing unfolds I decided to redouble my efforts in trying different distros again.  I’m trying everything from trailing edge (latest Debian) to bleeding edge (Solus).  As luck would have it, it was time for me to refresh one of my development VMs, so I decided to jump that one from Mint to Solus to give it a real-world spin.  My first impression is that it is a really interesting distro and one I’ll keep playing with, but there is one not-so-tiny problem that hopefully they will grow out of.


I Want My Linux Laptop Now! (A Voluntary Simplicity Exercise)

I’m being impatient, and it’s my own fault.  I started that Linux Craptop experiment to see how much mileage I could get out of a decade-old laptop running a lean(ish) Linux.  That actually became my only home laptop while my 6+ year old (I think) MacBook Air was getting its battery replaced.  I was going to “suffer” through it for just a few days and then the MacBook would hold me over for at least another couple of years.  At this point however I’m really chomping at the bit to retire that Mac and go Linux full bore.


Yes, you can survive with a ten year old laptop running Mint MATE

At the beginning of January I decided to try my hand at using a ten year old laptop running Linux Mint MATE as my daily at-home machine.  While there is certainly some cruft associated with using such an old machine, for the most part the experience was perfectly fine.  In fact I’m using it right now to write this article.  I wouldn’t recommend running out and buying one solely for the purpose, but the fact remains that Linux Mint MATE, and probably Ubuntu MATE as well, provide a great experience for an average user’s workload on underpowered hardware.


VS Code Saved My Linux Mint VM

I’ve been a huge convert to Linux Mint and Ubuntu for several years now.  In the last year I went so far as to be running Linux as my bare metal OS on both my work laptop and home desktop.  I’ve never had an update for Mint or Ubuntu get so borked up that the UI refused to function properly…until now.


What’s missing most from my Linux Craptop? Gestures

I was away for a week so I couldn’t work on my Linux craptop experiment.  Sorry, but I refuse to be beholden to a ten year old laptop while on travel.  So now, today, is the second day that I’m using it as my primary machine for browsing the Internet and doing things while I’m watching TV on the couch.  Yes that seems like a limited use case, but I spend a good amount of time vegging in that state so it’s not as insignificant as it seems.  I’ll have a thorough breakdown of my experiment at some point, but by far the biggest nuisance driving me crazy is the lack of trackpad gestures.

When gestures first came out for laptops I thought they were mostly gimmicky, but once I had my first laptop that really had them I was hooked and didn’t know it.  Now that I’m trying to use a laptop without them I’m finding it very cumbersome.  It’s not a total loss however, because this trackpad has the beginnings of gestures in the form of scroll regions on the right and bottom edges.  I can simulate the scrolling to some extent, which is a big part of my gesture use, but it really isn’t the same thing.  How did we live without gestures all this time?  At least Linux Mint MATE 18 supported these limited gestures out of the box for this ancient laptop.


Ancient Craptop Linux Experiment

Sometime in 2016 the Linux Action Show podcast decided on a lark to run a modern version of Linux on ten year old equipment.  As luck would have it, along with my other eccentric hobbies I also have a classic computer collection.  One of the computers in my collection that I ran across recently is a Dell XPS M1530 from late 2007 (specs).  I bought it as a not-too-crappy but not-so-great home laptop suitable for browsing the internet, doing my home finances, et cetera.  Because I’m a glutton for punishment, I guess, I have decided to try to use this laptop as a modern browsing computer for a little while.  With a 2.6 GHz Intel Core 2 Duo and 4 GB of RAM it shouldn’t do too badly.  I’m going to run Linux Mint MATE 18.1 to give it a fighting chance.  Ubuntu and Cinnamon require a bit more graphics and CPU horsepower, and while the 4 GB of memory should allow it to hold its own to some extent, the ten year old processor and graphics card will suffer.  MATE on the other hand is far lighter weight and more streamlined.

Probably the biggest hiccup is going to be the battery.  This is the original battery from ten years ago.  I doubt that it is going to hold up well to being unplugged.  That’s okay though, I’ll be able to leave it plugged in while I’m using it without much inconvenience.  I’m not going to make this my primary laptop or anything so if I can only use it while tethered to the couch then so be it.

I’m currently finishing up patching the system, getting printers set up, and doing software installs for things like Chrome.  I look forward to playing around with this in the coming weeks and reporting on it.  In fact I’m writing this very blog post in Firefox on it right now while the OS patches continue to progress…


Non-Windows .NET is Still Second Class Citizen

I am very early in the Linux .NET development experiment.  I am pretty busy with work and life, so I don’t have a ton of time to play around with these things.  Having come from a background where most of my recent development (the last several years) has been in technologies other than .NET, I have a double hurdle to clear: getting used to .NET and getting used to doing .NET on Linux.  Therein lies the rub.


.NET on Linux–An Experiment

I may have cut my teeth on non-Microsoft systems, but the better part of my career was spent building most of my software with and for Visual Studio.  It was only in the last few years that the landscape changed and my work became dominated by Linux, Java, and generally non-Microsoft systems.  I’ve thoroughly enjoyed the explosion of open source software and the ability to contribute to and use it.  I’ve also enjoyed being able to extricate myself from Windows.  But with Microsoft’s recent foray into open source, and with the increasing stagnation and calamities in the Java community, I’ve decided to give the .NET stack a whirl again, but with a twist.


Wix/WordPress Argument Shows Viral Nature of GPL

Over on Slashdot there is an article about an IP saga of sorts between Wix and the makers of WordPress.  While the Slashdot title accuses Wix of “stealing” code, not even WordPress’s Matt Mullenweg accused them of that in his original post.  What happened is pretty simple.  The Wix engineers wrapped a WordPress rich text control so it would work well with React Native.  They released that wrapper project under an MIT license and then dutifully used it in their proprietary iOS application.  The WordPress control they wrapped was licensed under the GPL, and that is where the problem lies.


MacBook Pro’s, Way Improved But Meh

With the release of the latest MacBook Pros, Apple has finally returned to some semblance of modernity in their laptop line.  They have left their desktop line to languish for at least another six months though.  That makes my recent purchase of a Hackintosh rig (that I admittedly still happily run Linux on without even considering the need to go back to OSX) seem like an even better idea.  Even with my embrace of Linux I still would have kicked myself if a dream iMac came forward, but thankfully nope!  Which brings me back to the latest laptops.  They are obviously a welcome upgrade to a laptop line that time forgot.  They have some very neat features.  They carry the usual Apple Tax: about $400 more than a comparably specced Dell and about the same price as a much better System76 laptop.  But Apple has far better battery life than either of those two machines would ever have.

Is it a great upgrade?  Yes.  Is it worth the money?  Probably/maybe/depends.  Is it something I’m dying to buy?  No.  At this point none of the Apple laptop offerings are drawing me in.  My MacBook Air mostly gets the job done, even if it’s starting to show its five years of age.  But the processor isn’t the thing that’s killing me; it’s mostly the memory limit when I try to run VMs.  Spending $1500-$1800 just to fix that problem seems outrageous.  At this point I’m going to go with my original plan: play around with my seven year old Dell running Linux and then give a System76 laptop a whirl.


Google Experiment Over Before It Begins?

I’ve been prepping to potentially jump from iPhone to Android for my personal phone.  I’m getting sick of the declining quality of iOS and its apps.  I’m getting sick of vendor lock-in.  My problem with vendor lock-in has a lot to do with a feeling that I’m not in control of my data.  Based on what I’ve been reading, and this article on TechCrunch, it seems the problem is becoming far more exacerbated on Android with the Google platform.  I could already see it with the latest service offerings Google has been pitching with the new Pixel.  As I played around with the Google apps they seemed at least as wonky, if not more so, and on top of that far more invasive about how they deal with your data.  They also do a lot more of the insipid “opting out” versus “opting in” that I see on iOS.  While I may be buying into a supposedly more open platform, would I be doing it at the expense of my own data control?  Do I need to look beyond Android to Ubuntu phones or something?  SMH.


Fork VirtualBox to revive the 4.x line?

A few years ago, after yet another one of those hacker scares of compromised browsers and operating systems, I decided to get a bit dramatic, stop working primarily on my computer’s host operating system, and instead run everything I could inside of virtual machines.  VirtualBox has always been my tried and true technology, but in recent months it has suffered a plague of major stability problems across all of my host operating systems.  These are problems I never had under VirtualBox 4.  The 3D drivers seem to get more and more unstable with each subsequent upgrade of Windows or macOS.  Chrome/Chromium/Electron applications that used to run okay are now display-artifact hell.  With the latest batch of updates the audio drivers keep failing, as well as the 3D drivers.


A Shift Away From Health Experimentation

My experimentation in fitness has taken a back seat these past few years.  My hardcore experimentation has really devolved into hitting a point of being unsatisfied with where I am and then clawing back for a bit.  While I intend to continue writing on that topic as the desire strikes me, the experimentation that has been occupying my time recently is my good old computers/software engineering.  I’m adding a section to this blog specifically for this, and changing the format around to accommodate it.  I look forward to getting all of these thoughts out of my head and onto “paper”.

