Mocking the Google Maps API in Jasmine/PhantomJS

At Public Lab we have several JavaScript repositories that are built with Grunt. Most of them end up included in our Ruby on Rails app as npm packages, which means that to make a change on the website I sometimes have to clone the JS repository, make the changes (adding new functions, error catching), bump the version number, ask someone to publish the changes, and then use npm in the Rails app to update the package.

Last week one of the changes I had to make was to the Google Maps API we use: I had to switch from the server-side version to the client-side JavaScript version. I made the appropriate changes, made sure everything was working smoothly… and then discovered that the Jasmine tests were throwing a warning. Since it was a warning and not an error we tried ignoring it, but the API change was also causing problems in other repositories that used the main one, so I realized I had to figure out a fix.

Testing Functions That Use The Google Maps API

This is what I saw when I ran the tests in one of the JS repositories:

ReferenceError: Can't find variable: google

And it caused a string of failures in one of our other repositories that depends on this one.

I spent a lot of time dissecting each line of code and each test, making changes and trying different things. I'd spend hours on something I thought could fix it, only to find out that nope, it wasn't going to work. At first I was just trying to get the API itself functioning in the tests; I couldn't figure out why the google object didn't exist after the script with the API key was loaded.

My first epiphany came when I looked at the actual Google Maps API code and saw this:

window.google = window.google || {};
google.maps = google.maps || {};

This is a problem because in headless testing there is no window object! I tried various ways of declaring an empty window object and then letting the API continue as usual, but that didn't work. I finally realized that the best way to solve this problem – though possibly time-consuming to set up – was to mock out the google API. That would also let me add specific tests for the functions that use geocoding, which we didn't have. And again, at first I tried this in various ways with no success: I couldn't get the helper file to work, I couldn't call a function, and I had to figure out how to pass a constant variable into the tests that the app could then modify.
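For context, the functions I needed to test use the Geocoder roughly like the sketch below. This isn't the real library code – just a rough, hypothetical sketch of the pattern – but it shows which pieces of the API the mock has to provide:

// Hypothetical sketch of the kind of function under test (not the actual
// source): build a Geocoder, reverse-geocode a lat/lng pair, and hand a
// readable place name to the callback.
function getPlacenameFromCoordinates(lat, lng, precision, callback) {
  var geocoder = new google.maps.Geocoder();
  geocoder.geocode({ location: { lat: lat, lng: lng } }, function(results, status) {
    if (status === google.maps.GeocoderStatus.OK && results.length > 0) {
      // precision 0 keeps only the coarsest part of the address (the country)
      var parts = results[0].formatted_address.split(',');
      callback(parts[parts.length - 1]);
    }
  });
}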

Googling how to test the Google Maps API turned up this page among the results, which gave me a simple mock object to build on. After some trial and error I wrote a stand-in for Geocoder, because that's the one my function was going to be calling. To do this I had to deconstruct how the geocoding call works: you hand it a request and a callback function, and it executes that callback with the results and a status.

// Minimal stand-in for the parts of the Maps API that the code under test touches
google = {
  maps: {
    places: {
      AutocompleteService: function() {},
      PlacesServiceStatus: {
        INVALID_REQUEST: 'INVALID_REQUEST',
        NOT_FOUND: 'NOT_FOUND',
        OK: 'OK',
        OVER_QUERY_LIMIT: 'OVER_QUERY_LIMIT',
        REQUEST_DENIED: 'REQUEST_DENIED',
        UNKNOWN_ERROR: 'UNKNOWN_ERROR',
        ZERO_RESULTS: 'ZERO_RESULTS',
      }
    },
    Geocoder: function(stringObj, functionToDo) {
      // Placeholder constructor – the Jasmine spy created below replaces it
      // and supplies the geocode() behaviour the tests actually rely on.
      functionToDo = functionToDo || function() {};
      functionToDo(response.results, "OK");
    },
    GeocoderStatus: {
      ERROR: 'ERROR',
      INVALID_REQUEST: 'INVALID_REQUEST',
      OK: 'OK',
      OVER_QUERY_LIMIT: 'OVER_QUERY_LIMIT',
      REQUEST_DENIED: 'REQUEST_DENIED',
      UNKNOWN_ERROR: 'UNKNOWN_ERROR',
      ZERO_RESULTS: 'ZERO_RESULTS',
    },
  }
};

Next I needed a response object to pass in. This is your test data, so this is what you will expect to see when your function works correctly. Change it at will!

response = {
  "results" : [
     {
        "address_components" : [
           {
              "long_name" : "Winnetka",
              "short_name" : "Winnetka",
              "types" : [ "locality", "political" ]
           },
           {
              "long_name" : "New Trier",
              "short_name" : "New Trier",
              "types" : [ "administrative_area_level_3", "political" ]
           },
           {
              "long_name" : "Cook County",
              "short_name" : "Cook County",
              "types" : [ "administrative_area_level_2", "political" ]
           },
           {
              "long_name" : "Illinois",
              "short_name" : "IL",
              "types" : [ "administrative_area_level_1", "political" ]
           },
           {
              "long_name" : "United States",
              "short_name" : "US",
              "types" : [ "country", "political" ]
           }
        ],
        "formatted_address" : "Winnetka, IL, USA",
        "geometry" : {
           "bounds" : {
              "northeast" : {
                 "lat" : 42.1282269,
                 "lng" : -87.7108162
              },
              "southwest" : {
                 "lat" : 42.0886089,
                 "lng" : -87.7708629
              }
           },
           "location" : {
              "lat" : function() { return 25.0 },
              "lng" : function() { return 17.0 }
           },
           "location_type" : "APPROXIMATE",
           "viewport" : {
              "northeast" : {
                 "lat" : 42.1282269,
                 "lng" : -87.7108162
              },
              "southwest" : {
                 "lat" : 42.0886089,
                 "lng" : -87.7708629
              }
           }
        },
        "place_id" : "ChIJW8Va5TnED4gRY91Ng47qy3Q",
        "types" : [ "locality", "political" ]
     }
  ],
  "status" : "OK"
}
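One detail worth calling out: in location, lat and lng are functions rather than plain numbers. That mirrors the real API, where geometry.location is a google.maps.LatLng object and the coordinates are read through its lat() and lng() methods, so code like this tiny illustration (not from the test suite) works the same against the mock as against the real thing:

// Reading coordinates out of a geocoder result – identical calls work for a
// real google.maps.LatLng and for the mocked functions above.
var location = response.results[0].geometry.location;
console.log(location.lat(), location.lng());   // 25, 17 with this mock data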

Finally I had to re-create the Geocoder function using a Jasmine spy. I used some of the instructions here to help me figure out how to do this – the Jasmine docs aren't terribly helpful. It spies on the google.maps object we just created, but it also creates a spy object in the variable geocoder. That's necessary because my code expects this object to exist. Now when one of the functions tries to use the geocoder object, it will get this one instead of the real google.maps version.

var geocoderSpy;
var geocoder;

function createGeocoder() {
  // Replace the Geocoder constructor on our mock google.maps with a spy...
  geocoderSpy = spyOn(google.maps, 'Geocoder');
  // ...and create a spy object exposing the geocode() method the tests control.
  geocoder = jasmine.createSpyObj('Geocoder', ['geocode']);
  // Any `new google.maps.Geocoder()` in the code under test now returns that object.
  geocoderSpy.and.returnValue(geocoder);
}

Now in my spec file, after all of that has been initialized and the fixture has been loaded, I can run my tests. I make geocode a fake that passes the response object – along with whatever status I want to test for – to the callback. When I call a function from my code, getPlacenameFromCoordinates, I pass in the variables it expects and the callback function it will execute. It uses the fake geocoder to process that data and looks up the appropriate fields in the results object; in this case I expect to see the country name being saved.

beforeAll(function() {
  createGeocoder();
});

beforeEach(function() {
  var fixture = loadFixtures('index.html');
});

it("Checks if getPlacenameFromCoordinates returns country name for location precision 0", function() {
  geocoder.geocode.and.callFake(function(request, callback) {
    callback(response.results, google.maps.GeocoderStatus.OK);
  });

  var string = '';
  blurredLocation.getPlacenameFromCoordinates(42, 11, 0, function(result){
    string = result.trim();
  });
  expect(string).toBe('USA');
});
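Because geocoder.geocode is itself a Jasmine spy, you can also assert on how the code under test called it. This wasn't part of my original suite – just a small optional sketch:

it("sends a request to the geocoder", function() {
  geocoder.geocode.and.callFake(function(request, callback) {
    callback(response.results, google.maps.GeocoderStatus.OK);
  });

  blurredLocation.getPlacenameFromCoordinates(42, 11, 0, function(result) {});

  // createSpyObj records every call made to geocode
  expect(geocoder.geocode).toHaveBeenCalled();
});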

Jasmine Helper Files With Grunt

Second problem: moving the mock object to a helper file – why wasn't it working? There was a spec/support/jasmine.json file just as the Jasmine docs describe, everything was named correctly, but no files in the helpers/ folder were being loaded, no matter how I formatted the functions in the helper file.

My clue for this one came when I changed jasmine.json to load only one of my spec files, but it still loaded all of them. Searching online brought me to this page on Stack Overflow: if you are using Grunt to run Jasmine, that configuration goes in the Gruntfile.js instead. That explains it all!

So I was able to delete the support/ folder entirely and add my helpers to the Gruntfile.js:

jasmine: {
  src: "src/client/js/*.js",
  options: {
    specs: "spec/javascripts/*spec.js",
    helpers: "spec/helpers/*.js",
    vendor: [
      "node_modules/jquery/dist/jquery.js",
      "dist/Leaflet.BlurredLocation.js",
      "node_modules/jasmine-jquery/lib/jasmine-jquery.js"
    ]
  }
},
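For completeness, that jasmine block sits inside grunt.initConfig in the Gruntfile and relies on the grunt-contrib-jasmine plugin being loaded – roughly this shape, trimmed down from a typical setup:

module.exports = function(grunt) {
  grunt.initConfig({
    jasmine: {
      // ...the src/options/helpers block shown above...
    }
  });

  // grunt-contrib-jasmine provides the "jasmine" task run by `grunt jasmine`
  grunt.loadNpmTasks('grunt-contrib-jasmine');
};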

Now I was able to move the code into the spec/helpers/google_mock.js file and it loaded automatically! My tests are set up correctly, the geocoder object is mocked for testing, and everything runs with no errors! It feels good to see those green checkmarks. 🙂

Everybody Struggles

I had to take several mental detours during my contribution period and go down the rabbit hole a few times, for example on Rails ActiveRecord queries. I knew what I wanted to do, but I needed to figure out how Rails handled these queries in order to get the result I needed. I also had to learn a lot about Capybara testing – and testing in general!

One area that has been full of stumbling blocks (and then growth!) is Git. I've got a pretty good handle on the regular workflow of push and pull and rebase. The problems happen when I do something wrong and don't know how to back out of it. Then I get to search Google for possible answers and try them out. Usually it works. Sometimes I make a bigger mess and have to ask my mentors what to try next.

  • Made a commit and then wished I hadn't? I can use either git revert (which creates a new commit undoing the changes) or git reset --hard <HASH> (which resets back to a specific commit and throws away everything after it).
  • Committed to master instead of my branch? Oops, I had to learn git cherry-pick.
  • Somehow added extra commits to my branch while cherry-picking? With git rebase -i HEAD~8 I was able to go down the list and choose which commits to keep and which to drop.

A Jasmine Test Failure

One of the stumbling blocks I've encountered is when my code works properly in the app but the tests all fail. This happened in one of our JavaScript repositories, which uses Jasmine (different from the Capybara testing I had used before in our Rails repo). The tests had been passing until some recent commits and a rebase with conflicts, so I needed to figure out exactly what was causing them to fail.

First stage in error debugging: google the error! Copy and paste it, see what pops up. Once in a while you'll get lucky and the answer will be right there on the front page – someone else has already solved it for you. But my error was extremely vague, “undefined is not an object” – and it was the same error on every test. This wasn't a situation where one test failed because of a change; all of the testing was completely broken.

Considering the scale of the error I wanted to double-check that testing worked in my local environment at all. However, when I switched branches it gave me some errors on the command line and wouldn't let Jasmine run at all… even though I had just run the tests in my feature branch. This has been happening to me with some regularity, and it's very frustrating! Sometimes I get a “grunt not found” message, which confuses me because I use grunt all the time in that repo, so how can it just disappear? I've learned to run npm install just in case some dependencies were updated; that has fixed several of my issues in the past. And it fixed things this time too!

The first thing I tested was running grunt jasmine on the main branch. All the tests passed, so that was a good starting point! I always like to check a known-good branch just to make sure the tests are running correctly. If it had turned out the issue wasn't in my feature branch at all and something else was causing every test to fail, it would be better to learn that first rather than after spending hours searching the code.

I knew there were very few changes between main and feature-branch, so my next step was to compare them to see if some code had been accidentally deleted or changed when I handled the merge conflict. Google brought me to this page on Stack Overflow. I can use git diff branch1..branch2 – but that outputs plain text and would be hard for me to read across a large file. That page also mentioned the tool Meld. It looks very useful on its own for comparing one file to another, but how to use it with Git? Its help files say it can be done but don't give specific instructions. That page had an answer for that too: git config --global diff.tool meld sets Meld as the default diff tool, and then git difftool branch1..branch2 opens it. However, one very important note here: this opens every single changed file, one by one, with a Y/N prompt for whether to open each in Meld. I have a whole lot of files, so I needed a way to specify which one I wanted to look at! Thankfully Google told me I can do git difftool branch1..branch2 folder/file.js to target the one I want. Excellent! I opened it up and looked through carefully, only to discover… the code looked correct, only my functions were added, and there was nothing obviously wrong.

So my next step was to identify what code was causing the failure. I had added 4 functions, so first I commented them out and ran the tests – they passed. I added one back in – they failed. I left the function there but commented out all the lines inside it – they passed. This is where I got stuck for a while, because at first it seemed like any code inside the function caused a failure, and that didn't make sense. I had a forced break for a while, which I didn't want but which in the end was possibly a good thing! When stuck, it can actually be really helpful to walk away and put your brain on something else for a while. Sometimes you need your brain to shift gears and get out of a stuck path – if you find yourself trying or thinking the same thing repeatedly with no progress, you probably need a break. When I came back I looked it over again. I had recently added a different new function that worked and passed the tests. So why did that one work and the new one didn't? I scrolled through the existing functions with an eye for what was different, and I noticed that all the other functions use var. I had used let. I tried var in my failing function… and it passed! Problem identified!
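To make the fix concrete, here's a toy before-and-after – not the actual function from the repository, just an illustration of the kind of change:

// Before: failed under PhantomJS, whose older engine doesn't fully support
// ES2015 declarations like `let`.
function isSameLocation(a, b) {
  let sameLat = a.lat === b.lat;
  let sameLng = a.lng === b.lng;
  return sameLat && sameLng;
}

// After: identical logic declared with `var` – the Jasmine suite passes.
function isSameLocation(a, b) {
  var sameLat = a.lat === b.lat;
  var sameLng = a.lng === b.lng;
  return sameLat && sameLng;
}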

The final step was refactoring one of my functions because it didn’t want to run with var. It was easy enough to find a different way to compare two values, so that was a quick fix.

I still don't understand exactly why using let caused all those errors in Jasmine – my best guess is that PhantomJS ships an older JavaScript engine with incomplete ES2015 support, so a let declaration can break a file where var works fine. I was only using temporary variables inside a function to do a simple task; there was no scope or re-declaration conflict. But the tests are now passing and my code was able to be pushed up for merging. And next time I'm writing code in this repo I'll be very wary of how I declare variables!

Applying for Outreachy

I already wrote about my experience setting up a dual-boot system while I started the application process. I had a very busy month while I worked on contributions and submitted my final proposal. Last Tuesday I got the email saying that I had been selected as an intern on the Public Lab project! I should probably make a post about how the rest of the application process went.

My first word of advice is to choose your top project candidates from the list of options ahead of time, then try to get the repositories installed and running on your system. Having to dual-boot my system on the day I wanted to start contributions was stressful! I also recommend really looking at the code base and trying to get familiar with it; it can take some time to understand what's going on in a large project.

Once the repository was working I started looking for a small contribution to make. Make sure you read any instructions the organization has; Public Lab wants everyone to start with a small, very easy first-timers-only issue as an introduction. Unfortunately there are going to be a lot of other people also looking for a beginner's issue, so be patient and polite.

I made my first few small pull requests and felt pretty good about it. I've officially contributed to open source, hurrah! I wanted to show the mentors of the project that I am capable and skilled at code, friendly and helpful with others, and willing to listen to the help and suggestions that they give. Public Lab really values cooperation and working as a team; it's more than just submitting a contribution and being done. Then look around and decide what larger contributions you could make, both as a learning experience and also to show what you are capable of. I looked for something that was a challenge and a little intimidating, but not so complicated that I would be completely stuck. I aimed to complete different types of issues: some in Rails, some CSS, and some JavaScript. Not only was this good for showing my abilities, it also reminded me that I am capable of taking on different challenges and succeeding!

I had prior experience using Git for my own project, but I had never contributed to open source before. Public Lab has instructions and a posted Git Workflow; I don't know if all open source projects have that available, but it sure is helpful! I also watched some videos on Git (there are tons available on YouTube, and I'm sure most are good). One that stood out was Advanced Git Tutorial. It really broke things down into the pieces I needed to understand (mostly) what was happening. Our workflow involves rebasing, and I think it would be easy to just follow the instructions without understanding what it is really doing – and I don't like not understanding the big picture. A big part of this internship process is challenging yourself to try new things and build new skills. Take the time to dig into subjects and figure out what's going on.

Once I got some pull requests in I felt good about my chances, but also very nervous about who else was applying. I stayed active all month, spending my time as if I were already an intern working on projects. I also spent a lot of time building my proposal, which required me to look at the work they wanted done and then break it down and organize it. I was unsure how that would translate to the real internship, since Public Lab is having two interns work on a group project, but I based my proposal on what I would do if I were working by myself.

The final application was due on Nov 4. It was a relief to be done with the proposal and have it out of my hands, but it was also a tough wait, not knowing if I was going to be working in December or not. On November 26 the emails were sent out and I learned that I had been selected for an internship spot! I began working on December 2; the project ends on March 3. I am working on integrating geolocation and mapping features on PublicLab.org! I am so excited to have this opportunity!

Tools I Am Using

I am using Notion to create a wiki page of notes and links to the project's workflow, testing documentation, style guide, and planning posts. Initially I set up a To-Do template for listing the issues/pull requests I'm working on, but once we started the Outreachy project I switched to the Roadmap template. I love that it has space for notes on each issue, and I can add and remove properties to make it what I need. Plus I added my fellow intern so we can work on the project together and stay informed of each other's progress.

One of the other tools I am using daily is the FireShot Chrome Extension screenshot tool. It gives you the option of saving an image of just the visible portion of the webpage, saving the entire webpage from top to bottom, or saving a selection chosen with the mouse. I am really happy with the free version. I know there are a lot of different screenshot tools out there; just find one that works for you, because you will need it frequently when working in open source. Public Lab prefers to have a screenshot of any UI changes before merging anything, and of course it's a very good idea to post a screenshot when you have discovered a bug, or when something you tried is having odd results. It is so much easier for someone to help you when they can see what is happening!

I have used Photoshop many times in the past for design and photography editing, so I figured I was all set. While creating my proposal I did use it for a few graphics, but most of the mockups and graphics I did in Adobe XD, which I am completely new to. I have to rave about it; it makes life so much easier! It automatically snaps elements into alignment and matches margins. It's so quick to add text and rounded borders – to do all the things we do for websites. You can do it in Photoshop, but it's persnickety. The only downside to both apps is that I have to boot into Windows to use them – and my Windows installation is very sluggish at the moment; I think it needs a fresh install. But after my last experience with installing an OS I'm not feeling very eager!

Setting Up A Dual-Boot System for Outreachy

About a month ago I learned about Outreachy Internships from my Women in Web Dev group. An internship sounded really exciting! I’m building on my coding skills every day but what I really need is some experience working with others. Open Source is something I’ve wanted to do but wasn’t sure how to start; this opportunity was just the nudge I needed. I filled out my initial application and waited to hear back. On October 1st I received an email saying my application was accepted and it was time to start contributing!

I looked through all of the projects available – it was hard to narrow it down – and decided on PublicLab.org. I really love the reason behind the non-profit, science outreach is something I feel strongly about. Their list of requirements seemed to match well with what I know and I’ve been doing, so it seemed like a really good fit to me. I joined the chat and started looking through their GitHub repos. Their main repo is in Ruby, which I don’t have experience with, but Rails is set up very much the same as Laravel. I was excited to look at all the code structure and realize that it felt very familiar.

Getting started on this project was a very big challenge – bigger than I anticipated. I have never had issues setting up development environments for Laravel or Python in the past. Ruby on Rails, however… did not go well. I got it all installed with some trial and error, but unfortunately the server simply wouldn't run on my Windows machine. I tried the Windows Subsystem for Linux, which sounded like the perfect solution: I would use Linux to run the server and have access to a full Linux terminal, but could do my development in Windows. But the server still wouldn't start. Googling the question "how to run Rails on Windows" generally returned the answer: "don't."

I needed to dual-boot. Considering I’d just installed Ubuntu on a server for my other project I decided to stick with that. Partitioning my drive wasn’t hard; creating a live-boot usb of Ubuntu wasn’t hard; even booting up into the live-boot wasn’t hard! I started installing Ubuntu into my empty partition and expected I’d be done in about an hour. It didn’t go so well.

The first problem was that I couldn't get back into Windows. I should have been able to update the boot record to see both options and just switch back and forth – that didn't work. A lot of Google searching and experimenting later, I figured out that for some reason my Windows installation was using MBR, but Ubuntu was using GPT/UEFI. They didn't play well together, so I had to change one. I chose to convert Windows to UEFI, which thankfully was easy with this guide. Unfortunately, while switching the BIOS over to UEFI I managed to break it and my computer wouldn't boot – just a black screen. So my 7-year-old got to help me open up my computer tower and reset the CMOS battery. 🙂 That fixed the BIOS, and off I went to update the boot record in Ubuntu, now that it could see Windows on the drives. Seeing a dual-boot screen was delightful.

The second problem was far more frustrating: Ubuntu kept freezing. I would be able to install something, or log in and take a look around, or open up a program… and it would completely lock up. I had to hard reset every time, which makes me wince. Now this freezing created several bigger problems when it would freeze in the middle of installing something. I ended up with a bunch of errors and couldn’t really do much of anything before I’d have to restart. The couple of times it froze while I was updating the boot record trying to fix problem #1 had pretty bad results. I learned a lot about live-boot usbs and repairing boot sectors.

The problem was that I had no idea why it was freezing. Was it because my motherboard is pretty old? Was there something wrong with the drive? Was there something wrong with the Ubuntu installation drive? I'm not familiar enough with Linux to do all kinds of system checking. One site said to upgrade the kernel to fix freezing problems, but that seemed kind of risky for me to try. Then on one website I saw someone mention an issue with their graphics card drivers. That was worth a try, so I learned how to check my driver versions and update them from the generic drivers to the Nvidia ones. And just like that, as if someone had waved a magic wand, the problem was fixed. Of course I didn't know for sure right away; I had to just stare at the screen, holding my breath for an increasing length of time. The longer it went without freezing, the better I felt about my chances. By the end of the evening I was confident all was well!

So, lesson learned: next time I install Ubuntu, update the video drivers first.

After that I had no problems and could easily install Ruby and Rails and get the repo cloned and working. I did have to set up my SSH keys again, install Chrome, install a text editor (I chose VSCode this time, versus the Sublime I was using on Windows), and generally get things set up the way I like them. But so far it's going really well. The best discovery was learning that I can access all of my Windows files from Ubuntu. That makes life a whole lot easier!

And the best part is now the Rails server will run!