New Blog (Again)

Written by J. David Smith
Published on 08 September 2015

Last time I updated my blog, I was using a Clojure static-site generator built on top of stasis. This served me quite well. Over the past week or so, I've been hacking on a new blog (and also on a new site that will go up after I move to Florida). This blog is put together using hexo and Tufte CSS.

Why?

My old blog engine worked well for the most part. The major wrench in the works was actually the dependency on Emacs as an exporter. It prevented me from doing a lot of fancier things with the content, because templating by string concatenation is a pain in the ass, and it would routinely break with updates to org-mode. To be frank, my most significant gripe with Emacs these days is how difficult it is to maintain a statically versioned config: once installed, packages don't update, but whenever I re-run the setup (which happens more often than I want to admit when I'm hopping back and forth between machines), something inexplicably breaks because a MELPA package has updated.

I really liked the Clojure piece: that bit was very pleasant to work with. I didn't take the time to understand how some of the pieces *cough*optimus*cough* worked, but it still did exactly what I wanted.

Ditching org-mode

I ultimately ditched org-mode entirely, which was rather disappointing. The problem was that nothing else supported it, and I got sick of rolling my own workarounds when Markdown covers 99% of my use case and the edge cases are covered by inline HTML. Yes, I know ox-md exists and will let me export to Markdown, but there isn't much point in exporting from org when it is giving me relatively little: the amount of fiddling required to get the more advanced org features to render the way I want them to is too much for me.

I really like org-mode, so this was a tough sell for me. I even went through and created an org AST parser in Clojure (using the output of org-element as input), with Enlive to transform it to HTML, but it was finicky as hell and I knew I would not want to update it the next time the output of org-element changed.

Really, I could have plugged in markdown instead of org as the parser for my existing blog and gotten away with it. But no. That adventure is over for now; I have other adventures that are consuming what was formerly fiddling-with-blog-engine time.

Tufte CSS

I fell in love with Tufte CSS as soon as I saw it. I don't know if it is actually a great choice for my blog, but I'm gonna give it a shot! The highlights are its excellent font design, its incredible simplicity, and its lovely concept of margin notes. Margin notes are really simple in concept, but I've never seen them on a blog before. I am rather fond of asides and have frequently littered my posts with parentheticals containing them; I believe margin notes are better suited to the job.

No More Comments

I never really had issues with my Disqus comments, but I never had much use for them either. Nobody commented. They provided no analytics, and I doubt I'd have used any anyway. If people want to comment on a blog post, they can email me or tweet at @emallson.

Why Hexo?

I could describe some of the things I like about it, but honestly: it was the first batteries-included static blog engine for Node that I came across. It is doing everything I want for right now, so I'm unlikely to change it for the moment.

In Conclusion...

This is one part of my effort to update my site as a whole. Updating the style of my blog is an important piece. I have updated my main page as well, and am debating whether I should stick with Bootstrap or go with Tufte for it. I feel like I could accomplish a lot with that margin, using it to give more info and character to specific events, but we will see. We will see.

Why I Stopped Using ES6

Written by J. David Smith
Published on 18 July 2015

This summer I've been interning at IBM (again) and have been the sole JavaScript programmer working on an isolated, experimental (but super exciting) part of the project. My mentor/boss Kris gave me complete control over the technology stack, with the sole requirement that I not go crazy with the choice of language.

Pushing ClojureScript or Elm didn't seem like a great way to spend my time, so I instead chose to toy with another relatively new bit of technology: ECMAScript 6. This page has a great overview of the new features coming to JavaScript with ES6, but most of them haven't actually landed in JavaScript engines yet. I used the Babel transpiler to compile the code down from ES6 to ES5.

I was initially going to title this post "Why I Stopped Using Babel", but that would make it sound like there was some problem with Babel. I have had no issues whatsoever with Babel. The transpilation time was almost negligible (~1s for my code, combined with ~4s of browserify run time), it didn't perceptibly impact performance (even when I was profiling inner loops written in ES6), and it never caused any bugs. On the whole, Babel is excellent software, and if you want to use ES6 now, I highly recommend it. But there's the catch: you have to want to use ES6 now. And slowly, over the course of a couple of months, my desire to do so was sapped away (through no fault of Babel, and almost no fault of ES6).

The problems I had were mostly with integration. Two very important pieces of my workflow are Tern and Istanbul. Tern provides auto-completion and type-guessing integrated into Emacs; Istanbul provides code coverage reports. Neither of them supports ES6. With Istanbul, it was possible to work around this by running Babel on my code and then covering the ES5 output. However, the coverage reports were then off because of the extra code that Babel has to insert in order to properly simulate ES6 in ES5. Tern, on the other hand, had no such workaround. If I had chosen to use only fat arrows it would have been workable, since I discovered that copying the code handling normal functions over to arrow functions worked more or less as expected. Everything else, however, was a wash.
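
To make the coverage skew concrete, consider a default parameter. This is a sketch of my own, not code from the project, and the compiled form is paraphrased from memory rather than Babel's exact output:

// ES6 source: no branches here.
function greet(name = 'world') {
  return 'hello, ' + name;
}

// Roughly what the compiled ES5 looks like (paraphrased; Babel's real output differs in detail).
// greetCompiled is just an illustrative name for the transformed function.
function greetCompiled() {
  var name = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : 'world';
  return 'hello, ' + name;
}

Istanbul covers the compiled file, so that ternary counts as a branch. A test suite that always passes a name never takes the 'world' side, and branch coverage drops even though the original source contains no branch at all.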

So why not ditch Tern and put up with the Istanbul workaround until Istanbul gets ES6 support? As I used ES6 over the summer, I came to realize that in 99% of my usage, it wasn't much of an improvement. let is certainly useful (and the way variable scoping always should have worked), arrow functions are awesome, and for (a of as) finally gives the language a sane looping construct. Other than that, the only feature that's really exciting is destructuring, and while it is a bit of a pain to destructure complex data by hand, it isn't something I have to do often. Classes weren't of any use to me for this project either: none of my data made sense to represent as a class. Although in theory my React components would make sense as classes, I'd rather use the old, well-documented, clear method that supports mixins (which would have to be implemented through inheritance were I to use ES6 classes).
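
To put that in perspective, here is roughly the full extent of the ES6 I was actually leaning on (an illustrative sketch, not code from the project):

// let: block-scoped bindings, the way variable scoping always should have worked.
let count = 0;

// for...of: finally, a sane looping construct over arrays.
var as = [1, 2, 3];
for (let a of as) {
  count += a;
}

// Arrow functions: concise, and they don't rebind this.
var doubled = as.map((a) => a * 2);

// Destructuring: pleasant when pulling fields out of complex data.
var { width, height } = { width: 800, height: 600, depth: 32 };

Handy, certainly, but not much more than that.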

The decision ultimately came down to three things:

  1. I wasn't getting much from ES6 (just let, for-of, fat arrows, and destructuring).
  2. ES6 vs ES5 is just one more thing my team would have to pick up after I'm gone.
  3. ES6→ES5 transpilation is a thing that somebody would have to support after I'm gone, and there is no telling how long it will be before it is no longer needed.

In the interest of making my successor's life a tiny bit easier, I ultimately chose to ditch ES6 for ye olde ES5. I had to throw out a bunch of the ES6 prototype code anyway, so there was very little additional cost in stripping it out of the codebase. I believe this was the right decision for this project. Losing those few additional features was a bit painful, but gaining the proper support of my tools and shedding the incidental complexity of transpilation was, I think, worth it.

I'll probably still use ES6 with Babel for small side projects. (Anything large won't be in JS, even if it compiles to it!) If you want to try out ES6, Babel is a very safe and easy way to do it. I look forward to the day that ES6 has widespread support and Babel is…well, still needed for ES7 transpilation, but that's for another day.

Postscript

I don't like the import syntax, and don't even get me started on classes and inheritance in ES6.
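
For reference, the syntax in question. This is a throwaway illustration, not code from the project:

// ES6 module syntax, which never grew on me:
import { EventEmitter } from 'events';

// ES6 classes and inheritance, which I like even less:
class Logger extends EventEmitter {
  log(message) {
    this.emit('log', message);
  }
}

// The CommonJS and prototype style I'd rather keep writing:
// var EventEmitter = require('events').EventEmitter;
// var util = require('util');
// function OldLogger() { EventEmitter.call(this); }
// util.inherits(OldLogger, EventEmitter);
// OldLogger.prototype.log = function (message) { this.emit('log', message); };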

Anarchy Online: Why?

Written by J. David Smith
Published on 23 May 2015

Anarchy Online is a weird game. It is ancient; unwieldy in a way that only ancient games can be. The interface is bad, the gameplay is stale, and I can't think of a single reason to keep playing it. But I do. AO is one of those games that I keep coming back to. I can't help but wonder why.

I started playing AO a bit more than a decade ago, right when they began allowing players to play for free. Free players (colloquially known as 'froobs') have access only to the base game and the Notum Wars boosters, not any of the (4 at present) expansion packs. I played on and off as a froob for much of that period, never reaching higher than level 80 (of 200).

So why do I keep coming back? More than that: why the hell did I pick up the full expansion set this last time around? It was only $20, but still: Why? I am beginning to understand, I think. The game is one giant puzzle.

I was playing my new Fixer, running around in the Shadowlands, trying to figure out where to go next to keep leveling. I googled it, found some info, and set about trying to act on it. And failed over and over again. Dangerous enemies were between me and my goal. As of writing this, I have yet to figure out a way to slip past them.

It isn't that these enemies are over-leveled for me, either: they are on-level, and I can fight one (sometimes even two) at a time without dying. However, every entry point seems to drop me into situations where I must fight a minimum of two and often three of these creatures.

There are many possible ways I could deal with this. Maybe I need to temporarily blow some IP (for the uninitiated: IP are points spent to increase skills) on Concealment and sneak past them. Maybe I need to go hunt for better nanos and the requisite buffs to equip and cast them. Maybe I need a better gun (or two). I don't know.

As someone who loves puzzles and is absolutely unconcerned with reaching the level cap in a timely manner, I enjoy this. The struggle just to succeed. I have fond memories of pugging ToTW on my Agent (Emallson – my namesake), pushing all the way to the legionnaires for efficient XP or the final boss encounter for the wonderful loot (though I can't remember these days what he drops). Getting there as a solo player without any consistent help was hard. For about a month I was stuck on level 41, continuously dying before dinging and feeding the XP into my bonus pool (Aside: dying loses XP, which goes into a bonus pool that gives you 1.5x XP until you've regained all of it. I really like this system).

Again: it was a puzzle. How do I survive? What can I change? Where do I go? Who do I work with? It was fun. It is fun. This is why I still play this ugly, unwieldy game. Come to think of it, the unwieldiness actually feeds into that. The game gives you most of the information you could reasonably ask for, but it's scattered around. Figuring out which nanos I can reasonably buff into requires finding not only which nanos I can get (in the shop) but also which buffs I can get cast on me (most often by an MP). Figuring out which weapons I can pull from missions without spending too much time on the search has no good answer at all, thanks to the QL system. And so on.

There are a lot of things that I like about this game. There are enough of them that I feel I can look past the ugliness and unwieldiness to enjoy it. It's fun to explore this world. And that's what I want from a game: fun.

2014 in Review

Written by J. David Smith
Published on 12 January 2015

2014 was a big year for me. More opportunities presented themselves, more things changed, and more happened than in any year of my life before it. It's time for me to review some of the big points; to reflect on what went well and what didn't.

Interning at IBM

When I applied for internships in December of 2013, I wasn't sure what would happen. I applied to big names – Google, Microsoft, IBM, and others – as I had the year prior. In 2012-2013, I got no responses. In 2013-2014, I got many. My applications to both Google and IBM were accepted, Riot Games asked for an interview (which I unfortunately had to decline because I'd already accepted IBM's offer), and Microsoft ignored my existence (maybe because my resumé is slathered in Linux tooling and has not a whiff of Microsoft on it).

I struggled for weeks with the decision between Google and IBM. Working at Google is a dream job, but there was a catch: the project I would be working on there was boring. Meanwhile, the project I was offered at IBM was really cool and exciting. At the time, it involved significant open-source contributions. Although it changed later, the change helped refine the project goals and clarify what my team would be doing.

In the end, I chose IBM. I was both looking forward to and dreading starting there at the end of May. What if I had chosen incorrectly? Once we got started, however, all my doubt vanished. The project turned out to be just as exciting as it had sounded. Even better: I had the pleasure of working with a phenomenal group of people. On the IBM side, we had a fantastic manager (Ross Grady) and great support from the group we were working with.

On the intern side, things couldn't have been better. My team was phenomenal: John and Walker were (and are) great technically, and all four of us (me, John, Walker, and Chris) worked together without even a hint of an issue throughout the Summer. What's more, I was surprised at how welcome I felt in the intern group. I've never been very comfortable socially, and yet by the end of the Summer there was but one intern I'd not call a friend.

The biggest benefit of the internship for me was not the technical knowledge I gained, the skills I developed, or the money I made. It was the opportunity to work with these people. Prior to this, I had never had the chance to work with other programmers. I'd worked in a research lab, but that has a very different focus. Seeing how capable my fellow interns were, and realizing that I was actually able to keep up with them, was a tremendous confidence boost for me.

I have no regrets about my decision to work at IBM this past Summer. I came out of it knowing more, having more friends and contacts, and with several offers for positions at IBM. I ended up declining all of them to pursue a PhD, but set up an internship with one of the security software teams for Summer 2015.

The Interview

In the middle of the Summer, I got a wholly unexpected phone call: a Google recruiter contacted me about interviewing for a full-time position. At the time, my plans for the future were undecided but leaning heavily towards the pursuit of a PhD. I told him that I would be willing to talk more after the Summer ended, when I had more time.

When I followed up with him in August/September, things moved rapidly. I was able to skip the phone interviews because I'd done well enough on the ones for the internship to receive an offer, so I got to fly to California and interview in person! Working full-time at Google requires clearing a high bar, and being brought out to interview suggested that I may be close to it.

In the end, I did not receive an offer. However, I was thrilled at the thought that I might be capable of reaching and surpassing the skill level needed for entry. The experience also forced me to work out mentally how to deal with serious rejection. I have been generally successful throughout my life and hadn't faced rejection on this level before. I am glad it came at a time when I had the opportunity to stop and think about it, rather than during a super-busy season.

The Fulbright Program

I also began working on an application to the Fulbright U.S. Student Program over the summer. This program – if I were accepted – would let me study at a school almost anywhere in the world. The grant covers one year, but I would be able to build a case for financial aid and a visa to continue on, should I desire.

The application itself was, for the most part, not too bad. However, the two essays that go along with it (the Personal Statement and the Statement of Purpose) were especially difficult. I had never written anything like them before and was ill-prepared to do so. The advisor at UK was incredibly helpful, and I believe I ended up with a competitive application. Regardless, I spent a solid month and a half thinking about nothing else. This prepared me well to write the statements for grad school applications, but it was a significant time sink.

The worst part about this application is that I won't know the result until March of this year, while the deadline was September of last year. The long waiting period is killer, and is a problem I am facing in other areas as well.

Graduate School Applications

This is where I made my biggest mistake of the year: I did not work on grad school applications over Thanksgiving break. I took the week off: I slept, I played video games, I wrote code. I did not apply to grad school. Because of this, I was ill-prepared to meet the popular 15 December deadline. I was better prepared for the 1 January deadline that other schools have, but between the insanity of finals week (15-20 Dec.) and Christmas, I ended up being largely tardy with that as well. (Also, far fewer schools have the later deadline.)

I learned in 2012/2013 not to wait so long. I made a point of doing internship applications in '13 on Thanksgiving break so as to not miss deadlines. I learned the lesson, and then in arrogance forgot it. I applied to four schools: MIT, Texas A&M, UFlorida and UKansas. I have already been accepted into UKansas (0.0), but we'll see what happens.

I probably won't hear back from the other three schools until mid-March. I will have little enough time to make a decision, and will have to start planning for the Fall immediately. What really gets me is simply the waiting period. I do not know what will happen. I cannot realistically make any plans for or assumptions about after the summer until March. It sucks. I don't like it.

Goals for 2014

I didn't really set goals for 2014, save one that I stumbled upon through meditation on Tom Shear's (Assemblage 23) Otherness. It is a long-term goal: be a better person. I started trying to write down a concrete version of it while writing this blog post, but I will need to think about it more. I realize how incredibly wishy-washy 'be a better person' is, and I need to nail it down so I know what I'm going for. Details will come in a blog post sometime in the next week.

Looking Forward: Goals for 2015

I am not a fan of New Year's resolutions, and thus have none. However, over the course of last semester I became aware of several deficiencies in my overall behavior. In particular: my aversion to lists and my inconsistency.

Lists are helpful tools, yet I often do not use them. I saw how dependent my dad became on his lists to remember things, and I suppose I overreacted. I started keeping lists of assignments and due dates this past semester, and it reduced the number of times I missed an assignment due to forgetfulness.

This is one method of moving towards my present goal: becoming more consistent. Self-discipline is not one of my strong points, but I have been working on improving. The impact of this will be better control over what I buy, what I eat, and how I spend my time. It meshes well with my goal of 'be a better person' (lol), as control will allow me to be who I want to be.

I have a long way to go.

Evaluating JavaScript in a Node.js REPL from an Emacs Buffer

Written by J. David Smith
Published on 01 June 2014

For my internship at IBM, we're going to be doing a lot of work on Node.js. This is awesome: Node is a great platform. However, I very quickly discovered that the state of Emacs ↔ Node.js integration is dilapidated at best (as far as I can tell, at least).

A Survey of Existing Tools

One of the first tools I came across was the swank-js / slime-js combination. However, when I (after a bit of pain) got both set up, slime promptly died when I tried to evaluate the no-op function: (function() {})().

Many of the pages describing how to work with Node in Emacs seem woefully out of date. I did eventually find nodejs-repl via package.el, though, and it worked great right out of the box. However, it was missing what I consider a killer feature: evaluating code straight from the buffer.

Buffer Evaluation: Harder than it Sounds

Most of the languages I use that have a REPL are Lisps, which makes choosing what code to run in the REPL when I mash C-x C-e pretty straightforward. The only notable exceptions are Python (which I haven't used much outside of Django since I started using Emacs) and JavaScript (which I haven't used an Emacs REPL for before). Thankfully, while the problem is actually quite difficult, a collection of functions from js2-mode, which I use for development, made it much easier.

The first thing I did was try to figure out how to evaluate things via Emacs Lisp. Thus, I began with this simple function:

(defun nodejs-repl-eval-region (start end)
  "Evaluate the region specified by `START' and `END'."
  (let ((proc (get-process nodejs-repl-process-name)))
    (comint-simple-send proc (buffer-substring-no-properties start end))))

It worked! Even better, it put the contents of the region in the REPL so that it was clear exactly what had been evaluated! Whole-buffer evaluation was similarly trivial:

(defun nodejs-repl-eval-buffer (&optional buffer)
  "Evaluate the current buffer or the one given as `BUFFER'.

`BUFFER' should be a string or buffer."
  (interactive)
  (let ((buffer (or buffer (current-buffer))))
    (with-current-buffer buffer
      (nodejs-repl-eval-region (point-min) (point-max)))))

I knew I wasn't going to be happy with just region evaluation, though, so I began hunting for a straightforward way to extract meaning from a js2-mode buffer.

js2-mode: Mooz is my Savior

Mooz has implemented JavaScript parsing in Emacs Lisp for his extension js2-mode. This means I can use his tools to intelligently extract meaningful, complete segments of code from a JS document. I experimented for a while in an Emacs Lisp buffer, and in short order it became clear that the fundamental unit I'd be working with was the node. Each node is a segment of code, not unlike a symbol in a BNF grammar. He's implemented many different kinds of nodes, but the ones I'm mostly interested in are statement and function nodes. My first stab at function evaluation looked like this:

(defun nodejs-repl-eval-function ()
  (interactive)
  (let ((fn (js2-mode-function-at-point (point))))
    (when fn
      (let ((beg (js2-node-abs-pos fn))
            (end (js2-node-abs-end fn)))
        (nodejs-repl-eval-region beg end)))))

This worked surprisingly well! However, it only let me evaluate functions that the point currently resided in. For that reason, I implemented a simple reverse-searching function:

(defun nodejs-repl--find-current-or-prev-node (pos &optional include-comments)
  "Locate the first node before `POS'.  Return a node or nil.

If `INCLUDE-COMMENTS' is set to t, then comments are considered
valid nodes.  This is stupid, don't do it."
  (let ((node (js2-node-at-point pos (not include-comments))))
    (if (or (null node)
            (js2-ast-root-p node))
        (unless (= 0 pos)
          (nodejs-repl--find-current-or-prev-node (1- pos) include-comments))
      node)))

This searches backwards one character at a time to find the closest node. Note that it does not find the closest function node, only the closest node. It'd be pretty straightforward to incorporate a predicate function to make it match only functions or statements or what-have-you, but I haven't felt the need for that yet.

My current implementation of function evaluation looks like this:

(defun nodejs-repl-eval-function ()
  "Evaluate the current or previous function."
  (interactive)
  (let* ((fn-above-node (lambda (node)
                          (js2-mode-function-at-point (js2-node-abs-pos node))))
         (fn (funcall fn-above-node
                      (nodejs-repl--find-current-or-prev-node
                       (point) (lambda (node)
                                 (not (null (funcall fn-above-node node))))))))
    (unless (null fn)
      (nodejs-repl-eval-node fn))))

You Know What I Meant!

My next step was to implement statement evaluation, but I'll leave that out of here for now. If you're really interested, you can check out the full source.

The final step in my rather short adventure through buffer-evaluation-land was a *-dwim function. DWIM is Emacs shorthand for Do What I Mean. It's seen throughout the environment in function names such as comment-dwim. Of course, figuring out what the user means is not feasible – so we guess. The heuristic I used for my function was pretty simple:

  1. If the region is active, evaluate the region.
  2. If the point is at the end of a line, evaluate the first statement found at or before the previous character (this usually skips back over a trailing semicolon to the statement on the current line).
  3. Otherwise, evaluate the first statement found at or before the point.

This is succinctly representable using cond:

(defun nodejs-repl-eval-dwim ()
  "Heuristic evaluation of JS code in a NodeJS repl.

Evaluates the region, if active, or the first statement found at
or prior to the point.

If the point is at the end of a line, evaluation is done from one
character prior.  In many cases, this will be a semicolon and will
change what is evaluated to the statement on the current line."
  (interactive)
  (cond
   ((use-region-p) (nodejs-repl-eval-region (region-beginning) (region-end)))
   ((= (line-end-position) (point)) (nodejs-repl-eval-first-stmt (1- (point))))
   (t (nodejs-repl-eval-first-stmt (point)))))

The Beauty of the Emacs Development Process

This whole adventure took a bit less than two hours, all told. Keep in mind that, while I consider myself a decent Emacs user, I am by no means an ELisp hacker. Previously, the extent of my ELisp had been one-off advice functions for my .emacs.d. Being a competent Lisper, I've always found ELisp pretty straightforward to work with, but I did not imagine that this project would end up being so simple.

The whole reason it ended up being easy is that the structure of Emacs makes it very easy to experiment with new functionality. The built-in Emacs Lisp REPL had me speeding through iterations of my evaluation functions, and the ability to jump to a function by name with a single key-chord was invaluable. This would not have been possible had I been unable to read the sources of comint-mode, nodejs-repl, and js2-mode for context. Even if I had merely been forced to grep through the codebases instead of jumping straight to functions, it would have taken longer and been much less enjoyable.

The beautiful part of this process is really how it enables one to stand on the shoulders of those who came before. I accomplished more than I had expected in far, far less time than I had anticipated because I was able to read and re-use the code written by my fellows and precursors. I am thoroughly happy with my results and have been using this code to speed up prototyping of Node.js code. The entire source code can be found here.