Mocking the Google Maps API in Jasmine/PhantomJS

At Public Lab we have several JavaScript repositories that are built with grunt. Most of them end up included in our Ruby on Rails app as npm packages, which means that to make a change on the website I sometimes have to download the JS repository, make the changes (adding new functions, error catching), bump the version number, ask someone to publish the release, and then update the package in the Rails app with npm.

Last week one of the changes I had to make involved the Google Maps API we use: I had to switch from the server-side version to the client-side JavaScript version. I made the appropriate changes, made sure everything was working smoothly… and then discovered that the Jasmine tests were throwing a warning. Since it was a warning and not an error we tried ignoring it, but the API change was also causing problems in other repositories that depend on the main one, so I realized I had to figure out a fix.

Testing Functions That Use The Google Maps API

This is what I saw when I ran the tests in one of the JS repositories:

ReferenceError: Can't find variable: google

And it caused similar failures in one of our other repositories that depends on this one.

I spent a lot of time dissecting each line of code and tests, making changes, and trying different things. I'd spend hours working on something I thought could fix it, only to find out that nope, it wasn't going to work. At first I was just trying to get the API functioning in the tests; I couldn't figure out why the google object didn't exist after the API script was loaded.

My first epiphany came when I looked at the actual Google API code and saw this:

window.google = window.google || {};
google.maps = google.maps || {};

This is a problem because in headless testing there is no window object! I tried various ways of declaring an empty window object and then letting the API continue as usual, but that didn't work. I finally realized that the best way to solve this problem, though possibly time-consuming to set up, was to mock out the Google API. This would also let me add specific tests for the functions that use geocoding, which we didn't have. Again, at first I tried this in various ways with no success: I couldn't get the helper file to work, I couldn't call a function, and I had to figure out how to pass a constant variable into the tests that could be modified by the app.

Googling for ways to test the Google Maps API turned up this page, which gave me a simple mock object to build on. After some trial and error I wrote a function for Geocoder, because that's the one my code was going to be calling. To do this I had to deconstruct how the real geocoder works: it takes in a request and a callback function, then executes the callback with the results and a status.
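For reference, here's roughly how app code calls the real geocoder; this is the shape the mock has to imitate (the address is just an example):

var geocoder = new google.maps.Geocoder();
geocoder.geocode({ address: 'Winnetka, IL' }, function(results, status) {
  if (status === google.maps.GeocoderStatus.OK) {
    // results is an array of matches, like the response object below
    console.log(results[0].formatted_address);
  }
});

And here is the mock I built to stand in for it: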

// a minimal stand-in for the pieces of the google.maps namespace we use
google = {
  maps: {
    places: {
      AutocompleteService: function() {},
      PlacesServiceStatus: {
        INVALID_REQUEST: 'INVALID_REQUEST',
        NOT_FOUND: 'NOT_FOUND',
        OK: 'OK',
        OVER_QUERY_LIMIT: 'OVER_QUERY_LIMIT',
        REQUEST_DENIED: 'REQUEST_DENIED',
        UNKNOWN_ERROR: 'UNKNOWN_ERROR',
        ZERO_RESULTS: 'ZERO_RESULTS',
      }
    },
    Geocoder: function(stringObj, functionToDo) {
      // immediately "succeed": run the callback with the canned response
      // object (defined below) and an OK status
      functionToDo = functionToDo || function() {};
      functionToDo(response.results, "OK");
    },
    GeocoderStatus: {
      ERROR: 'ERROR',
      INVALID_REQUEST: 'INVALID_REQUEST',
      OK: 'OK',
      OVER_QUERY_LIMIT: 'OVER_QUERY_LIMIT',
      REQUEST_DENIED: 'REQUEST_DENIED',
      UNKNOWN_ERROR: 'UNKNOWN_ERROR',
      ZERO_RESULTS: 'ZERO_RESULTS',
    },
  }
};

Next I needed a response object to pass in. This is your test data, so it's what you expect to see when your function works correctly. Change it at will!

response = {
  "results" : [
     {
        "address_components" : [
           {
              "long_name" : "Winnetka",
              "short_name" : "Winnetka",
              "types" : [ "locality", "political" ]
           },
           {
              "long_name" : "New Trier",
              "short_name" : "New Trier",
              "types" : [ "administrative_area_level_3", "political" ]
           },
           {
              "long_name" : "Cook County",
              "short_name" : "Cook County",
              "types" : [ "administrative_area_level_2", "political" ]
           },
           {
              "long_name" : "Illinois",
              "short_name" : "IL",
              "types" : [ "administrative_area_level_1", "political" ]
           },
           {
              "long_name" : "United States",
              "short_name" : "US",
              "types" : [ "country", "political" ]
           }
        ],
        "formatted_address" : "Winnetka, IL, USA",
        "geometry" : {
           "bounds" : {
              "northeast" : {
                 "lat" : 42.1282269,
                 "lng" : -87.7108162
              },
              "southwest" : {
                 "lat" : 42.0886089,
                 "lng" : -87.7708629
              }
           },
           "location" : {
              "lat" : function() { return 25.0 },
              "lng" : function() { return 17.0 }
           },
           "location_type" : "APPROXIMATE",
           "viewport" : {
              "northeast" : {
                 "lat" : 42.1282269,
                 "lng" : -87.7108162
              },
              "southwest" : {
                 "lat" : 42.0886089,
                 "lng" : -87.7708629
              }
           }
        },
        "place_id" : "ChIJW8Va5TnED4gRY91Ng47qy3Q",
        "types" : [ "locality", "political" ]
     }
  ],
  "status" : "OK"
}

Finally I had to re-create the Geocoder function using a Jasmine spy. I used some of the instructions here to help me figure out how to do it (the Jasmine docs aren't terribly helpful). The code spies on the google.maps object we just created, but it also creates a spy object in the variable geocoder. That's necessary because my code expects this object to exist. Now when one of my functions tries to use the geocoder object, it will get this one instead of the real google.maps version.

var geocoderSpy;
var geocoder;

function createGeocoder() {
  geocoderSpy = spyOn(google.maps, 'Geocoder');
  geocoder = jasmine.createSpyObj('Geocoder', ['geocode']);
  geocoderSpy.and.returnValue(geocoder);
}
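With that in place, any code under test that constructs a geocoder gets the spy object back, because a constructor that returns an object overrides the result of new. A quick sketch of what this buys you inside a spec:

createGeocoder();
var g = new google.maps.Geocoder();  // g === geocoder, the spy object
g.geocode('some address', function(results, status) {});  // recorded by the geocode spy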

Now in my spec file, after all of that has been initialized and the fixture has been loaded, I can run my tests. I make the fake geocoder call its callback with the response object and whatever status I want to test for. When I call a function from my code, getPlacenameFromCoordinates, I pass in the variables it expects and the callback function it will execute. It uses the fake geocoder to process that data and looks up the appropriate fields in the results object; in this case I expect to see the country name being returned.

beforeAll(function() {
  createGeocoder();
});

beforeEach(function() {
  loadFixtures('index.html');
});

it("Checks if getPlacenameFromCoordinates returns country name for location precision 0", function() {
  geocoder.geocode.and.callFake(function(request, callback) {
    callback(response.results, google.maps.GeocoderStatus.OK);
  });

  var string = '';
  blurredLocation.getPlacenameFromCoordinates(42, 11, 0, function(result){
    string = result.trim();
  });
  expect(string).toBe('USA');
});
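Testing an error path works the same way. Here's a hypothetical sketch; the expectation depends on what your own code does with a failed status, so that part is only a comment:

it("handles ZERO_RESULTS from the geocoder", function() {
  geocoder.geocode.and.callFake(function(request, callback) {
    // no matches: an empty results array plus the error status
    callback([], google.maps.GeocoderStatus.ZERO_RESULTS);
  });

  blurredLocation.getPlacenameFromCoordinates(0, 0, 0, function(result) {
    // assert whatever your code is supposed to do in this case
  });
});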

Jasmine Helper Files With Grunt

Second problem: moving the mock object to a helper file. Why wasn't it working? There was a spec/support/jasmine.json file just as the Jasmine docs describe, everything was named correctly, but no files in the helpers/ folder were being loaded, no matter how I formatted the functions in the helper file.

My clue came when I changed jasmine.json to only load one of my spec files, but it still loaded all of them. Searching online brought me to this page on Stack Overflow: if you are using grunt to run Jasmine, that configuration goes in the Gruntfile.js instead. That explains it all!

So I was able to delete the support/ folder entirely and add my helpers to the Gruntfile.js:

jasmine: {
  src: "src/client/js/*.js",
  options: {
    specs: "spec/javascripts/*spec.js",
    helpers: "spec/helpers/*.js",
    vendor: [
      "node_modules/jquery/dist/jquery.js",
      "dist/Leaflet.BlurredLocation.js",
      "node_modules/jasmine-jquery/lib/jasmine-jquery.js"
    ]
  }
},

Now I was able to move the code into the spec/helpers/google_mock.js file, and it loaded automatically! My tests are set up correctly, the geocoder object is mocked for testing, and everything runs with no errors. It feels good to see those green checkmarks. 🙂

Everybody Struggles

I had to take several mental detours during my contribution period and go down the rabbit hole a few times, for example on Rails ActiveRecord queries. I knew what I wanted to do, but I needed to figure out how Rails handled these queries in order to get the result I needed. I also had to learn a lot about Capybara testing, and testing in general!

One area that has been full of stumbling blocks (and then growth!) is Git. I've got a pretty good handle on the regular workflow of push, pull, and rebase. The problems happen when I do something wrong and don't know how to back out of it. Then I get to search Google for possible answers and try them out. Usually it works. Sometimes I make a bigger mess and have to ask my mentors what to try next.

  • Made a commit and then wished I hadn't? I can use either git revert (creates a new commit undoing the changes) or git reset --hard <HASH> (resets back to a specific commit and throws away everything after it).
  • Committed to master instead of my branch? Oops! I had to learn git cherry-pick.
  • Somehow added extra commits to my branch while cherry-picking? With git rebase -i HEAD~8 I was able to go down the list and choose which commits to keep and which to drop.

A Jasmine Test Failure

One of the stumbling blocks I've encountered is when my code works properly in the app but all the tests fail. This happened on one of our JavaScript repositories, which uses Jasmine testing (different from the Capybara testing I had used before in our Rails repo). The tests had been passing until some recent commits and a rebase conflict merge, so I needed to figure out exactly what was causing them to fail.

First stage in error debugging: google the error! Copy and paste it and see what pops up. Once in a while you'll get lucky and the answer will be right there on the front page; someone else already solved it for you. But my error was extremely vague, "undefined is not an object", and it was the same error on every test. This wasn't a case of one test failing because of a change; all of the testing was completely broken.

Considering the scale of the error I wanted to double-check that testing was working in my local environment at all. However, when I switched branches I got errors on the command line and Jasmine wouldn't run at all… even though I had just run the tests in my feature branch. This has been happening to me with some regularity, and it's very frustrating! Sometimes I get a "grunt not found" message, which confuses me because I use grunt all the time in that repo, so how can it just disappear? I've learned to run npm install in case some dependencies were updated; that has fixed several of my issues in the past. And it fixed it this time!

The first thing I tested was running grunt jasmine on the main branch. All the tests passed, so that was a good starting point! I always like to check a known-good branch just to make sure the tests are running correctly. That told me the problem wasn't my testing setup at all; something in my feature branch was causing all the tests to fail. Better to learn that first rather than after spending hours searching the code.

I knew there were very few changes between main and my feature branch, so my next step was to compare them to see if some code had been accidentally deleted or changed when I handled the merge conflict. Google brought me to this page on Stack Overflow. I can use git diff branch1..branch2, but that outputs text and would be hard to read for a large file. That page also mentioned the tool Meld. It looks very useful on its own for comparing one file to another, but how do you use it with Git? Its help files say it can be done but don't give specific instructions. The same page had an answer for that too: git config --global diff.tool meld sets Meld as the default diff tool, and then git difftool branch1..branch2 opens it. One very important note here: that will open every single file in the diff, one by one, with a y/n prompt for each. I had a whole lot of files, so I needed a way to specify which one to look at! Thankfully Google told me I can do git difftool branch1..branch2 folder/file.js to target the one I want. Excellent! I opened it up and looked through carefully, only to discover… the code looked correct. Only my functions were added; there was nothing obviously wrong.

So my next step was to identify what code was causing the failure. I had added 4 functions, so first I commented them all out and ran the tests: passing. I added one back in: failing. I left the function there but commented out all the lines inside it: passing. This is where I got stuck for a while, because at first it seemed like any code inside the function caused a failure, and that didn't make sense.

I had a forced break for a while, which I didn't want but which in the end was probably a good thing! When you're stuck, it can be really helpful to walk away and put your brain on something else for a while. Sometimes you need your brain to shift gears and get out of a stuck path; if you find yourself trying or thinking the same thing repeatedly with no progress, you probably need a break.

When I came back I looked it over again. I had recently added a different new function that worked and passed the tests. So why did that one work and the new one didn't? I scrolled through the existing functions with an eye for what was different, and I noticed that all the other functions use var. I had used let. I tried var in my failing function… and it passed! Problem identified!

The final step was refactoring one of my functions because it didn’t want to run with var. It was easy enough to find a different way to compare two values, so that was a quick fix.

I still don't fully understand why using let caused all those errors; I was only using temporary variables inside a function for a simple task, with no scope or re-declaration conflict. My best guess in hindsight is that these tests run in PhantomJS, which doesn't support ES6 syntax, so a single let can break the parsing of the whole source file and take every test down with it. Either way, the tests are now passing and my code was pushed up for merging. And next time I'm writing code in this repo I'll be very wary of how I declare variables!
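A hypothetical illustration (not my actual function) of the kind of one-word change involved:

function sameValue(a, b) {
  // ES6 version that broke the whole suite: let matches = a === b;
  // as far as I can tell, under PhantomJS a single let is a SyntaxError,
  // the source file fails to load, and every spec errors out
  var matches = a === b;  // ES5 version that passed
  return matches;
}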

Applying for Outreachy

I already wrote about my experience setting up a dual-boot system when I started the application process. It was a very busy month of working on contributions and building my final proposal. Last Tuesday I got the email saying I had been selected as an intern on the Public Lab project! I should probably make a post about how the rest of the application process went.

My first word of advice is to choose your top project candidates from the list of options ahead of time, then try to get the repositories installed and running on your system. Having to dual-boot my system on the day I wanted to start contributions was stressful! I also recommend really looking at the code base and trying to get familiar with it; it can take some time to understand what's going on in a large project.

Once the repository was working I started looking for a small contribution to make. Make sure you read any instructions the organization has; Public Lab wants everyone to start with a small, very easy first-timers-only issue as an introduction. There are going to be a lot of other people also looking for a beginner's issue, so be patient and polite.

I made my first few small pull requests and felt pretty good about it. I've officially contributed to open source, hurrah! I wanted to show the project's mentors that I am capable and skilled at coding, friendly and helpful with others, and willing to listen to their help and suggestions. Public Lab really values cooperation and working as a team; it's more than just submitting a contribution and being done. The next step is to look around and decide what larger contributions you could make, both as a learning experience and to show what you are capable of. I looked for something that was a challenge and a little intimidating, but not so complicated that I would be completely stuck. I aimed to complete different types of issues: some in Rails, some CSS, and some JavaScript. Not only was this a good way to show my abilities, it also reminded me that I am capable of taking on different challenges and succeeding!

I had prior experience using Git for my own projects, but I had never contributed to open source before. Public Lab has instructions and a posted Git workflow; I don't know if all open source projects have that available, but it sure is helpful! I also watched some videos on Git (there are tons available on YouTube, and I'm sure most are good). One that stood out was Advanced Git Tutorial. It really broke things down into the pieces I needed to understand (mostly) what was happening. Our workflow involves rebasing, and I think it would be easy to just follow the instructions without understanding what's really going on; I don't like not understanding the big picture. A big part of this internship process is challenging yourself to try new things and build new skills. Take the time to dig into subjects and figure out what's going on.

Once I got some pull requests in I felt good about my chances, but I was also very nervous about who else was applying. I stayed active all month, spending my time as if I were already an intern working on projects. I also spent a lot of time building my proposal, which required me to look at the work they wanted done, break it down, and organize it. I was unsure how that would translate to the real internship, since Public Lab has two interns working on a group project, but I based my proposal on what I would do if I were working by myself.

The final application was due on Nov 4. It was a relief to be done with the proposal and have it out of my hands, but then came a tough wait, not knowing if I was going to be working in December or not. On November 26 the emails went out and I learned that I had been selected for an internship spot! I began working on December 2; the project ends on March 3. I am working on integrating geolocation and mapping features on PublicLab.org! I am so excited to have this opportunity!

Tools I Am Using

I am using Notion to create a wiki page of notes and links to the project's workflow, testing documentation, style guide, and planning posts. Initially I set up a To-Do template for listing the issues and pull requests I'm working on, but once we started the Outreachy project I switched to the Roadmap template. I love that it has space for notes on each issue, and I can add and remove properties to make it what I need. Plus I added my fellow intern, so we can work on the project together and stay informed of each other's progress.

One of the other tools I am using daily is the FireShot Chrome extension, a screenshot tool. It gives you the option of saving an image of just the visible portion of the webpage, the entire page from top to bottom, or a selection chosen with the mouse. I am really happy with the free version. There are a lot of different screenshot tools out there; just find one that works for you, because you will need it frequently when working in open source. Public Lab prefers to have a screenshot of any UI change before merging anything, and of course it's a very good idea to post a screenshot when you have discovered a bug, or when something you tried is having odd results. It is so much easier for someone to help you when they can see what is happening!

I have used Photoshop many times in the past for design and photo editing, so I figured I was all set. While creating my proposal I did use it for a few graphics, but I made most of the mockups and graphics in Adobe XD, which I am completely new to. I have to rave about it: it makes life so much easier! It automatically snaps elements into alignment and matches margins. It's so quick to add text and rounded borders and do all the things we do for websites. You can do it in Photoshop, but it's persnickety. The only downside to both apps is that I have to boot into Windows to use them, and my Windows installation is very sluggish at the moment; I think it needs a fresh install. But after my last experience with installing an OS I'm not feeling very eager!

Javascript: Trigger AJAX Success From Function

Last week I had an interesting challenge come up in regard to AJAX. I wrote a $.ajax call so JavaScript could submit the data instead of going through an existing form. I grabbed the form's action attribute to use as the URL and sent the data.

$.ajax({
    url: $('#submit-form').attr('action'),
    data: { name: 'generic' }
});

The problem was that the controller responded with JSON data that needed to be processed. Elsewhere, the form already had a function that triggered on ajax:success:

$('#submit-form').bind('ajax:success', function(e, response){
    // do stuff
});

But because the form wasn't the one sending the new $.ajax call, it wasn't receiving the response, so the bound function never triggered. The challenge was that the form was still being used and couldn't be touched, and obviously I didn't want to duplicate the code in my ajax call. I thought about moving the code into a helper function and calling it from both places.

But then I found my answer: I can manually trigger that bound function from elsewhere in the code.

$.ajax({
    url: $('#submit-form').attr('action'),
    data: { name: 'generic' },
    success: (data, status) => {
        // hand the response to the form's existing ajax:success handler
        $('#submit-form').trigger('ajax:success', data);
    }
});

Nice and simple and I never had to touch the original code!

Setting Up A Dual-Boot System for Outreachy

About a month ago I learned about Outreachy Internships from my Women in Web Dev group. An internship sounded really exciting! I’m building on my coding skills every day but what I really need is some experience working with others. Open Source is something I’ve wanted to do but wasn’t sure how to start; this opportunity was just the nudge I needed. I filled out my initial application and waited to hear back. On October 1st I received an email saying my application was accepted and it was time to start contributing!

I looked through all of the projects available (it was hard to narrow them down) and decided on PublicLab.org. I really love the mission behind the non-profit; science outreach is something I feel strongly about. Their list of requirements seemed to match well with what I know and what I've been doing, so it seemed like a really good fit to me. I joined the chat and started looking through their GitHub repos. Their main repo is in Ruby, which I don't have experience with, but Rails is structured very much like Laravel. I was excited to look through the code and realize that it felt very familiar.

Getting started on this project was a very big challenge, bigger than I anticipated. I have never had any issues setting up development environments for Laravel or Python. Ruby on Rails, however… did not go well. I got it all installed with some trial and error, but unfortunately the server simply wouldn't run on my Windows machine. I tried the Windows Subsystem for Linux, which sounded like the perfect solution: I would use Linux to run the server and have access to a full Linux terminal, but do my development in Windows. The server still wouldn't start. Googling "how to run Rails on Windows" generally returned the answer: "don't."

I needed to dual-boot. Considering I’d just installed Ubuntu on a server for my other project I decided to stick with that. Partitioning my drive wasn’t hard; creating a live-boot usb of Ubuntu wasn’t hard; even booting up into the live-boot wasn’t hard! I started installing Ubuntu into my empty partition and expected I’d be done in about an hour. It didn’t go so well.

The first problem was that I couldn't get back into Windows. I should have been able to update the boot record to see both options and just switch back and forth; that didn't work. A lot of Google searching and experimenting later, I figured out that for some reason my Windows installation was using MBR, but Ubuntu was using GPT/UEFI. They didn't play well together, so I had to change one. I chose to convert Windows to UEFI, which thankfully was easy with this guide. Unfortunately, while switching the BIOS to UEFI I managed to break it and my computer wouldn't boot: just a black screen. So my 7-year-old got to help me open up my computer tower and reset the CMOS battery. 🙂 That fixed the BIOS, and off I went to update the boot record in Ubuntu, now that it could see Windows on the drives. Seeing a dual-boot screen was delightful.

The second problem was far more frustrating: Ubuntu kept freezing. I would be able to install something, or log in and take a look around, or open up a program… and it would completely lock up. I had to hard-reset every time, which makes me wince. The freezing created several bigger problems when it happened in the middle of installing something: I ended up with a bunch of errors and couldn't really do much of anything before I'd have to restart. The couple of times it froze while I was updating the boot record to fix problem #1 had pretty bad results. I learned a lot about live-boot USBs and repairing boot sectors.

The problem was I didn't have any idea why it was freezing. Was it because my motherboard is pretty old? Was there something wrong with the drive? Was there something wrong with the Ubuntu installation media? I'm not familiar enough with Linux to do all kinds of system checking. One site said to upgrade the kernel to fix freezing problems, but that seemed risky for me to try. Then on one website I saw someone mention an issue with their graphics card drivers. That was worth a try, so I learned how to check my driver versions and switch from the generic drivers to the NVIDIA ones. And just like that, like someone had waved a magic wand, the problem was fixed. Of course I didn't know that for sure at first; I just had to stare at the screen, holding my breath, for increasing lengths of time. The longer it went without freezing, the better I felt about my chances. By the end of the evening I was confident all was well!

So, lesson learned: next time I install Ubuntu, I'll update the video drivers first.

After that I had no more problems and could easily install Ruby and Rails and get the repo cloned and working. I did have to set up my SSH keys again, install Chrome, install a text editor (I chose VSCode this time, versus the Sublime I was using on Windows), and generally get things arranged the way I like them. But so far it's going really well. The best discovery was learning that I can access all of my Windows files from Ubuntu. That makes life a whole lot easier!

And the best part is now the Rails server will run!

Using Image Intervention in Laravel to Resize Uploads

One of the requirements of my Laravel project is allowing users to upload a photo and post it. I didn’t want to set a low upload size because I know most of the users are going to be using their phone camera and those files can be very large. My solution was to allow large upload sizes but then resize the image before saving the file.

First we need to install Image Intervention in our project:

composer require intervention/image

In config/app.php we need to add this under providers:

'Intervention\Image\ImageServiceProvider',

and this under aliases:

'Image' => 'Intervention\Image\Facades\Image',

At the top of the Controller we need to declare it:

use Intervention\Image\ImageManagerStatic as Image;

Now that it’s all installed and ready to use we can look at the code for saving our image. This is how we would normally save an image:

if($request->file('uploadedFile')) {
    $image = $request->file('uploadedFile');
    $fileNameWithExt = $image->getClientOriginalName();
    $fileName = pathinfo($fileNameWithExt, PATHINFO_FILENAME);
    $extension = $image->getClientOriginalExtension();
    $fileNameToStore = $fileName."_".time().'.'.$extension;
    $image->storeAs('public/uploaded_images', $fileNameToStore);
}

With Image Intervention it looks like this:

if($request->hasFile('post_image')) {
    $image = $request->file('post_image');
    $fileNameWithExt = $image->getClientOriginalName();
    $fileName = pathinfo($fileNameWithExt, PATHINFO_FILENAME);
    $extension = $image->getClientOriginalExtension();
    $fileNameToStore = $fileName."_".time().'.'.$extension;
    $path = public_path('storage/uploaded_images/' . $fileNameToStore);
    Image::make($image->getRealPath())
        ->resize(800, 500, function ($constraint) {
            $constraint->aspectRatio();
            $constraint->upsize();
        })
        ->save($path);
}

Note the path: I had to try a few different ways of specifying where the file should be saved, and this is what worked for Laravel.

The resize() function documentation is here:
http://image.intervention.io/api/resize
I used a callback function for two constraints: aspectRatio(), which resizes the image but keeps the original aspect ratio, and upsize(), which prevents the image from being upsized to the given size if the original is smaller.

Now the uploaded images are resized so they aren't taking up a ton of storage space, they fit perfectly in the space I need them to, and the aspect ratio isn't distorted. If an uploaded image is smaller than the available space, it will display at its original size.

Laravel on Vultr with Ubuntu/Nginx

Finally! Time to push my Laravel project to a live server. I knew this was going to be quite a process so I was pushing it off until I had time to dedicate to it.

Setting up Ubuntu, Nginx, Laravel on a Vultr Server

I am not going to write up an entire explanation, because mostly I followed this very well-written guide:
https://devmarketer.io/learn/deploy-laravel-5-app-lemp-stack-ubuntu-nginx/

Updating to PHP 7.2

Unfortunately I realized I needed PHP 7.2 for my app and the server had 7.0. (In hindsight I probably should have chosen Ubuntu 18.) I found this guide for updating the PHP version:
https://ayesh.me/Ubuntu-PHP-7.2

sudo apt install php7.2 php7.2-common php7.2-cli php7.2-fpm

I had to update some additional PHP modules as well:

sudo apt install php7.2-zip php7.2-mbstring php7.2-mysql php7.2-gd

To point Nginx at the 7.2 socket, edit the default site config and change the fastcgi_pass line:

sudo nano /etc/nginx/sites-available/default

fastcgi_pass unix:/run/php/php7.2-fpm.sock;

When that was all set I followed the instructions for removing PHP 7.0, but something went awry and deleted more than it should have. I had to re-install a few things, but finally it was all running properly!

Domain Name Server

Pointing my domain name to my new server was easy with Vultr:
https://serverpilot.io/docs/how-to-configure-dns-on-vultr

File Permissions

And then the fun started: I ran into file permission errors. At first glance Google offers some obvious steps to take, which solved some, but not all, of my errors. It was really frustrating because everything worked perfectly in local development but failed on file upload in production.

Here’s what I learned.

First, make sure to set the storage link on the production server:

php artisan storage:link

Second, don't ever set a folder's permissions to 777, like half of the Google results say to. It should be set to 775:

sudo chmod -R 775 /var/www/laravel/storage

Unfortunately I still had one file upload section that wasn't working, even though the others were! After reading many ideas and trying various things, what finally worked for me was changing the ownership of the folders to www-data, the user that PHP-FPM runs as:

sudo chown -R www-data:www-data /var/www/laravel/storage

After all of that I discovered that some of my code had a bug in it (of course), but otherwise everything worked! Next time I will write about using Image Intervention to save the images.

CKEditor Dark Theme

Yesterday I wanted to adapt the CKEditor in my Laravel project for a dark background. (Note that this will work with any CKEditor install, not just in Laravel.) That shouldn't be a problem: I found and installed the Moono Dark theme.

  • In Laravel the skins folder is located at public/vendor/unisharp/laravel-ckeditor

Don't forget to add this line to config.js to enable the skin:

config.skin = 'moono-dark';

I thought I was done, but one look at the editor told me otherwise.

I love the bar with the buttons; that's perfect. But the background of the text area was still white, and that's no good: it's very harsh on a dark site. It should be easy to find a CSS selector for it and change it though, right? As it happens… no. No, it is not easy. Google showed me a lot of people who had the same problem, and solutions that didn't work. I updated CKEditor to 4.12 hoping that would make a difference, but it did not.

Here’s how I changed it.

First I decided to turn off the iframe by installing the Div Editing Area plugin. I'm sure there's a way to restyle the iframe, but for this method you'll have to install the plugin first.

  • In Laravel the plugins folder is located at public/vendor/unisharp/laravel-ckeditor/plugins

Enable the plugin in config.js:

config.extraPlugins = 'divarea';

Now you can go into /skins/moono-dark/editor.css and replace the following classes. I didn't change everything shown here, but to make it easier I've included everything within the affected classes so you can copy and paste to replace them. I'm sure I could have overridden these styles in my project's own CSS file, but I wanted to keep the changes contained so that I can re-use the skin in other projects.

.cke_bottom {
	padding: 6px 8px 2px;
	position: relative;
	border-top: 1px solid #0d0d0d;
	-moz-box-shadow: 0 1px 0 rgba(255, 255, 255, 0.15) inset;
	-webkit-box-shadow: 0 1px 0 rgba(255, 255, 255, 0.15) inset;
	box-shadow: 0 1px 0 rgba(255, 255, 255, 0.15) inset;
	background: #1f1f1f;
	background-image: -webkit-gradient(
		linear,
		left top,
		left bottom,
		from(#333),
		to(#1f1f1f)
	);
	background-image: -moz-linear-gradient(top, #333, #1f1f1f);
	background-image: -webkit-linear-gradient(top, #333, #1f1f1f);
	background-image: -o-linear-gradient(top, #333, #1f1f1f);
	background-image: -ms-linear-gradient(top, #333, #1f1f1f);
	background-image: linear-gradient(top, #333, #1f1f1f);
	filter: progid:DXImageTransform.Microsoft.gradient(gradientType=0,startColorstr='#ebebeb',endColorstr='#cfd1cf');
}
.cke_wysiwyg_div {
	display: block;
	height: 100%;
	overflow: auto;
	padding: 10px;
	outline-style: none;
	-moz-box-sizing: border-box;
	-webkit-box-sizing: border-box;
	box-sizing: border-box;
}
.cke_wysiwyg_div {
	background-color: #000;
	color: #CCC;
}
.cke_inner {
	display: block;
	-webkit-touch-callout: none;
	background: #000;
	padding: 0;
}
.cke_path_item,
.cke_path_empty {
	display: inline-block;
	float: left;
	padding: 3px 4px;
	margin-right: 2px;
	cursor: default;
	text-decoration: none;
	outline: 0;
	border: 0;
	color: #888;
	text-shadow: 0 1px 0 #000;
	font-weight: bold;
	font-size: 11px;
}
a.cke_path_item:hover,
a.cke_path_item:focus,
a.cke_path_item:active {
	text-decoration: none;
	background-color: rgba(0, 0, 0, 0.2);
	color: #888;
	text-shadow: 0 1px 0 rgba(0, 0, 0, 0.5);
	-moz-border-radius: 2px;
	-webkit-border-radius: 2px;
	border-radius: 2px;
	-moz-box-shadow: 0 0 4px rgba(0, 0, 0, 0.5) inset,
		0 1px 0 rgba(255, 255, 255, 0.5);
	-webkit-box-shadow: 0 0 4px rgba(0, 0, 0, 0.5) inset,
		0 1px 0 rgba(255, 255, 255, 0.5);
	box-shadow: 0 0 4px rgba(0, 0, 0, 0.5) inset,
		0 1px 0 rgba(255, 255, 255, 0.5);
}
/* add this rule if you want to fix the spacing at the top; */
/* my custom theme already formats paragraph tags, so you might not need it */
.cke_inner p:first-child {
	margin-top: 0;
}

This changes the main background, the border, and the bottom bar with the path items. Now the whole editor matches the dark theme!

I hope this helps someone! Let me know if you’ve figured out a solution for the iframe version.

Who am I and what am I doing here?

My name is Natalie and I’m a Web Developer. I’ve started this blog to share my progress, various tips and code, and whatever else may be relevant.

First of all, let's back up. I learned HTML back in high school. I decided to get a Bachelor's in Information Systems at college, learning things like object-oriented programming, relational databases, and networking. At the same time I was working on my own websites and doing small projects for people. I really enjoyed object-oriented programming in Java, but I loved web programming. After college I moved from Canada to the US and started freelancing. I did a lot of full-stack work with WordPress and PHP, plus all the design work.

At that point in my story my husband and I started a family, and funnily enough, babies kind of take over everything. I decided to stay home with the kids while they were little, and it has been a wonderful few years. My kids are some of the best people I know, and it has been a privilege to spend my days with them.

Now – I could say “finally,” because some days it feels like that, but it also feels like they were just born yesterday – my kids are both in school full time. I realized I know what I want to fill my time with: coding!

So here has been my journey so far:

  • Refresh my skills in JavaScript and PHP
  • Learn new things: JavaScript ES6, jQuery, CSS3, SASS
  • Learn Git and GitHub
  • Try out a new framework – I chose Laravel so I could build a larger project with RESTful routing
  • Familiarize myself with the latest version of WordPress

I’ve been having so much fun! Many of the new features available now are really cool. Grid and Flexbox are amazing. jQuery is simple and powerful. Git honestly seems like magic.

Right now I am working on a Laravel project, which is nearing completion. I’m very excited to be able to share it soon! When that is done I’ll be completely re-designing my portfolio site. After that is done I would like to tackle React or Vue.

At first I didn’t want to start a blog because I didn’t want to put myself out there; I’m not an expert (yet), and I’m certainly not going to pretend to be one. I decided, however, that I do have something worth sharing, and at the very least I may help others out who are on a similar journey. I’m continually impressed by the web dev community and how helpful everyone is, so here’s hoping to find my own little corner. Thanks for stopping in!