How to connect your ExpressJS app with Postgres using Knex

Note: I've created a video for this tutorial if you'd like to check that out here. Also, the supporting code for it can be found here.

Express is one of the most popular JavaScript frameworks for building backend APIs, and Postgres is a really popular relational database. So how do we connect the two?

If you look at the official documentation for Express on database integration, you'll see a section like this (using the pg-promise module):

var pgp = require('pg-promise')(/* options */)
var db = pgp('postgres://username:password@host:port/database')

db.one('SELECT $1 AS value', 123)
  .then(function (data) {
    console.log('DATA:', data.value)
  })
  .catch(function (error) {
    console.log('ERROR:', error)
  })

It works for sure, but it's not the way you would write it in a full-fledged production application. Some of the questions that come to mind are:

  • How do you create the tables in the database?
  • How do you track changes to the database? For example, when you alter a table or create a new table, or create/drop an index on a field. How do you keep track of all these changes in your git/cvs/svn repository?
  • What if you switch from Postgres to some other database in future, say MariaDB for example? Do all your queries still work?

There might be a lot more questions, but to me the most important one is keeping track of changes to the database in your application codebase. If someone clones my repository to their local system, they should have a command to create all the database tables on their local setup. Also, as we make changes to the database (adding/dropping tables or indices, or altering tables), one should be able to run a single command to sync their local copy of the database structure with the one on the production DB. I am talking about structure, not data. All the tables in the local database should have the same structure as those in the production database, which makes testing your application on a local machine much easier. And if you don't have this sync mechanism automated, you're likely to run into a lot of issues that you'll end up troubleshooting in production.

To solve these problems, we have libraries like Knex and Sequelize. These libraries provide a very neat API for writing SQL queries which are database agnostic and prevent issues like SQL injection attacks. They also provide transaction support to handle complex DB operations and a streaming API to handle large volumes of data in a script. Also, to keep track of structural changes to your database in your code repo, these libraries use the concept of migrations. Migrations are files where you describe the structural changes you want to make to your database. For example, let's say you have a users table and want to alter the table to add a new column gender. You can write a Knex migration file like this:

exports.up = knex => knex.schema
  .alterTable('users', (table) => {
    table.string('gender')
  });

exports.down = knex => knex.schema
  .alterTable('users', (table) => {
    table.dropColumn('gender');
  });

The up function defines what to do when we run the migration and the down function defines what to do when we roll back the migration. You can run the migration like this:

npx knex migrate:latest

And you can roll it back like this:

npx knex migrate:rollback

Once you commit this file to your code repository, your other team members can pull the changes from the repo and run these commands at their end to sync up the database structure on their machines.

In order to keep track of the database changes (migrations), Knex creates a few extra tables which contain information about which migrations have been applied. So, for example, if one of your team members hasn't synced their database in a long time and there are, say, 10 new migration scripts added since the last time they synced, then when they pull the latest changes from the repo and run the migration command, all those 10 migrations will be applied in the sequence they were added to the repository.

Anyway, coming back to the main topic of this post: how do we add Knex to our ExpressJS app and how do we use it to connect to our Postgres database? Before we dive into this, there are some prerequisites that should be met.

Pre-Requisites

  • Node.js version 8 or higher installed
  • Postgres installed and running on localhost:5432

Steps

We will divide this article into following steps:

  • Creating the Express app
  • Creating the API endpoint with some hard coded data
  • Creating a database for our app
  • Installing and configuring knex
  • Populating seed data with knex
  • Updating the API endpoint created in step 2 to fetch the data from database instead of returning hard coded data

For this tutorial, we will be using Ubuntu Linux but these instructions should work fine on other operating systems as well.

So, without further ado, let's get started with creating our Express app.

Step 1: Creating the Express app

Open the terminal (Command Prompt or PowerShell on Windows), navigate to the directory where you want to create this project and create the project directory. We will be calling our project express-postgres-knex-app (not very innovative, I know :-))

mkdir express-postgres-knex-app

Go to the project directory and run the following command to generate some boilerplate code using express generator

npx express-generator

The output should look like this:


   create : public/
   create : public/javascripts/
   create : public/images/
   create : public/stylesheets/
   create : public/stylesheets/style.css
   create : routes/
   create : routes/index.js
   create : routes/users.js
   create : views/
   create : views/error.ejs
   create : views/index.ejs
   create : app.js
   create : package.json
   create : bin/
   create : bin/www

   install dependencies:
     $ npm install

   run the app:
     $ DEBUG=express-postgres-knex-app:* npm start

This will create some files and directories needed for a very basic Express application. We can customize it as per our requirements. Among other things, it will create an app.js file and a routes directory with index.js and users.js files inside. In order to run our application, we need to follow the instructions in the output shown above. First, install the dependencies:

npm install

Then run the app using the following command:

DEBUG=express-postgres-knex-app:* npm start

This should start our server on port 3000. If you go to your browser, you should be able to see the express application on http://localhost:3000

Step 2: Creating the API endpoint with some hard coded data

The express generator automatically created a users router for us. If you open the file routes/users.js, you should see code like this:

var express = require('express');
var router = express.Router();

/* GET users listing. */
router.get('/', function (req, res, next) {
  res.send('respond with a resource');
});

module.exports = router;

Here, we need to return the users array instead of the string 'respond with a resource', and we need to fetch those users from our database. So, for step 2, we don't need to do anything as we already have a route created for us by the express generator. In the later steps, we will modify this code to actually fetch the users from our database.

Step 3: Creating a database for our app

In this tutorial, we have a pre-requisite that Postgres is installed on your machine. So, connect to the Postgres server (with psql, for example) and, once you're inside, run the following command to create the database for our app:

create database "express-app";

Step 4: Installing and configuring knex

Install knex and pg modules (since we are using postgres) by running the following command:

npm install knex pg

Once installed, initialize knex with a sample config file:

npx knex init

This should create a knexfile.js file in your project's root directory. This file contains the configuration to connect to the database. By default, the knexfile uses sqlite3 for development. We need to change this since we are using Postgres.

Modify your knexfile.js so it looks like this:

// Update with your config settings.
const PGDB_PASSWORD = process.env.PGDB_PASSWORD;

module.exports = {
  development: {
    client: 'postgresql',
    connection: {
      host: 'localhost',
      database: 'express-app',
      user: 'postgres',
      password: PGDB_PASSWORD
    },
    pool: {
      min: 2,
      max: 10
    },
    migrations: {
      tableName: 'knex_migrations',
      directory: `${__dirname}/db/migrations`
    },
    seeds: {
      directory: `${__dirname}/db/seeds`
    }
  }
};

Now, we need to create a service called DB where we initialize knex in our application with the config from knexfile.js. In the project's root directory, create a directory services and inside the services directory, create a file DB.js

In that file, add the following code:

const config = require('../knexfile');

// Fall back to the development config when NODE_ENV is not set
const environment = process.env.NODE_ENV || 'development';

const knex = require('knex')(config[environment]);

module.exports = knex;

Here, we are importing the config from the knexfile and initializing the knex object with it. Since we will be running our app in development mode, the development config from knexfile.js will be picked (we fall back to development when NODE_ENV is not set). If you run the app in production, you'll need to set NODE_ENV to production and add the corresponding production config in knexfile.js.
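
For reference, a production entry can sit right next to the development one in knexfile.js. The sketch below is only an assumption about how you might wire it up; DATABASE_URL is a hypothetical environment variable, not something defined elsewhere in this tutorial:

// knexfile.js (sketch): a hypothetical production entry alongside the development one
module.exports = {
  // development: { ...as shown above... },
  production: {
    client: 'postgresql',
    // knex also accepts a connection string; DATABASE_URL is assumed to look like
    // postgres://user:password@host:5432/dbname
    connection: process.env.DATABASE_URL,
    pool: { min: 2, max: 10 },
    migrations: {
      tableName: 'knex_migrations',
      directory: `${__dirname}/db/migrations`
    },
    seeds: {
      directory: `${__dirname}/db/seeds`
    }
  }
};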

Now, wherever in our app we need to pull data from the database, we need to import this DB.js

Step 5: Populating seed data with knex

So we have our express app up and running with knex integrated, and we have our postgres database created. But we don't have any tables or data in our database yet. In this step, we will use knex migrations and seed files to create a table and populate it with some initial data.

From the project's root directory, run the following commands:

npx knex migrate:make initial_setup

This will create a new file in the db/migrations directory.

npx knex seed:make initial_data

This will create a sample seed file under the db/seeds directory. First, we need to modify our migration file to create the users table. Open the newly created file under the db/migrations directory and modify it so it looks like this:

exports.up = function (knex) {
  return knex.schema.createTable('users', function (table) {
    table.increments('id');
    table.string('name', 255).notNullable();
  });
};

exports.down = function (knex) {
  return knex.schema.dropTable('users');
};

Here, in the up function, we are creating a users table with two fields: id and name. So, when we apply this migration, a new table will be created. And in the down function, we are dropping the users table. So, when we roll back our migration, the users table will be deleted.
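
For example, if you later decided to also store an email for each user, you could add a second migration instead of editing the first one. The sketch below is purely illustrative (the column name is hypothetical), but it shows how subsequent schema changes keep getting tracked in the repo:

// db/migrations/<timestamp>_add_email_to_users.js (hypothetical follow-up migration)
exports.up = function (knex) {
  return knex.schema.alterTable('users', function (table) {
    // add a new nullable column with a unique constraint
    table.string('email', 255).unique();
  });
};

exports.down = function (knex) {
  return knex.schema.alterTable('users', function (table) {
    table.dropColumn('email');
  });
};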

Also, open the newly created file under db/seeds directory and modify it so it looks like this:

exports.seed = function (knex) {
  // Deletes ALL existing entries
  return knex('users')
    .del()
    .then(function () {
      // Inserts seed entries
      return knex('users').insert([
        { id: 1, name: 'Alice' },
        { id: 2, name: 'Robert' },
        { id: 3, name: 'Eve' }
      ]);
    });
};

This will first remove any existing entries from our users table and then populate the same with 3 users.

Now that we have our migration and seed files ready, we need to apply them. Run the following command to apply the migration:

npx knex migrate:latest

And then run the following command to populate the seed data:

npx knex seed:run

Now, if you connect to your postgres database, you should be able to see the users table with 3 entries. Now that we have our users table ready with data, we need to update the users.js file to fetch the entries from this table.

Step 6: Updating the API endpoint created in step 2 to fetch the data from the database instead of returning hard coded data

Open the file routes/users.js and modify the API endpoint to look like this:

var express = require('express');
var router = express.Router();
const DB = require('../services/DB');

/* GET users listing. */
router.get('/', async function (req, res, next) {
  const users = await DB('users').select(['id', 'name']);
  return res.json(users);
});

module.exports = router;

Here, in the 3rd line we are importing the DB service. Then, inside our route handler, we are fetching the users using Knex's query builder:

const users = await DB('users').select(['id', 'name']);

Knex does the job of translating this to an SQL query:

SELECT id, name FROM users;

And then we return the users (an array of objects) as the JSON response.
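
The query builder goes well beyond simple selects. As a rough, illustrative sketch (the rows and values below are made up), here are a few other queries you could express with the same DB service:

// illustrative only: a few more query-builder calls using the same DB service
const DB = require('../services/DB');

async function examples() {
  // SELECT id, name FROM users WHERE id = 1 LIMIT 1
  const user = await DB('users').where({ id: 1 }).first();

  // INSERT INTO users (name) VALUES ('Dolores') RETURNING id
  // (depending on the knex version, returning() yields plain values or row objects)
  const inserted = await DB('users').insert({ name: 'Dolores' }).returning('id');

  // UPDATE users SET name = 'Maeve' WHERE id = 1
  await DB('users').where({ id: 1 }).update({ name: 'Maeve' });

  // DELETE FROM users WHERE id = 1
  await DB('users').where({ id: 1 }).del();

  return { user, inserted };
}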

Now, go to the terminal where you started the application earlier and stop the server. If you remember, in the knexfile we created earlier we were using an environment variable PGDB_PASSWORD for passing the postgres password to our config. So we will need to export this variable with the password of our postgres server:

export PGDB_PASSWORD=<enter your postgres password here>

Then run the Express server again

DEBUG=express-postgres-knex-app:* npm start

Now if you go to http://localhost:3000/users, you should see the JSON array of user objects fetched from your postgres database.

Conclusion

So, in this article we created an Express JS app and connected it with a postgres database using Knex. We also touched upon the benefits of using a robust library like Knex for handling database operations in our application and learned about the concept of migrations. Hope you found this article helpful.

How to install Elasticsearch 7 with Kibana using Docker Compose

This tutorial will help you set up a single-node Elasticsearch cluster with Kibana using Docker Compose.

Pre-requisites

This tutorial assumes you are comfortable with Docker and Docker Compose. If you are not, you can go through this article of mine which is kind of a crash course with Docker Compose (https://medium.com/swlh/simplifying-development-on-your-local-machine-using-docker-and-docker-compose-2b9ef31bdbe7?source=friends_link&sk=240efed3fd3a43a1779e7066edb37235)

Video Lesson

I have also created a video tutorial for this on my YouTube channel. If you prefer that, you may visit the link below and check it out

https://www.youtube.com/watch?v=EClKhOE0p-o

Step 1: Create docker-compose.yml file

Create a directory on your machine for this project

mkdir $HOME/elasticsearch7-docker
cd $HOME/elasticsearch7-docker

Inside that directory create a docker-compose.yml file with contents as shown below

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2-amd64
    env_file:
      - elasticsearch.env
    volumes:
      - ./data/elasticsearch:/usr/share/elasticsearch/data

  kibana:
    image: docker.elastic.co/kibana/kibana:7.9.2
    env_file:
      - kibana.env
    ports:
      - 5601:5601

Step 2: Create the env files

Both the Elasticsearch and Kibana docker images allow us to pass environment variables, which are mapped onto the configuration options defined in the elasticsearch.yml and kibana.yml files. To pass the environment variables to the containers, we can use the env_file setting of the docker compose file.

Create the elasticsearch.env file:

cluster.name=my-awesome-elasticsearch-cluster
network.host=0.0.0.0
bootstrap.memory_lock=true
discovery.type=single-node

Note: With the latest versions of Elasticsearch, it is necessary to set the option discovery.type=single-node for a single-node cluster, otherwise it won't start.

Create kibana.env file

SERVER_HOST="0"
ELASTICSEARCH_URL=http://elasticsearch:9200
XPACK_SECURITY_ENABLED=false

Step 3: Create the Elasticsearch data directory

Navigate to the directory where you have created your docker-compose.yml file and create a subdirectory data. Then inside the data directory create another directory elasticsearch.

mkdir data
cd data
mkdir elasticsearch

We will be mounting this directory to the data directory of elasticsearch container. In your docker-compose.yml file there are these lines:

    volumes:
      - ./data/elasticsearch:/usr/share/elasticsearch/data

This ensures that the data on your Elasticsearch container persists even when the container is stopped and restarted later. So, you won't lose your indices when you restart the containers.

Step 4: Run the setup

We're good to go now. Open your terminal and navigate to the folder containing your docker-compose.yml file and run the command:

docker-compose up -d

This will start pulling the images from docker.elastic.co and depending on your internet speed, this should take a while. Once the images are pulled, it will start the containers.

You can run the following command to see if both the containers are running:

docker-compose ps

The output should look something like this

                Name                              Command               State           Ports         
------------------------------------------------------------------------------------------------------
docker-elasticsearch-setup_elasticsearch_1   /tini -- /usr/local/bin/do ...   Up      9200/tcp, 9300/tcp    
docker-elasticsearch-setup_kibana_1          /usr/local/bin/dumb-init - ...   Up      0.0.0.0:5601->5601/tcp

Notice the State field. It should be Up for both the containers. If it is not, then check the logs using the following command (replace {serviceName} with the name of the service, e.g. elasticsearch or kibana):

docker-compose logs -f {serviceName}

A common error that you might encounter is related to vm.max_map_count being too low. You can fix it by running the following command (as root, or with sudo):

sysctl -w vm.max_map_count=262144

Check this link for more details https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html

If both the services are running fine, you should be able to see the Kibana console at http://localhost:5601 in your web browser. Give it a few minutes as it takes some time for the Elasticsearch cluster to be ready and for Kibana to connect to it. You can get more info by inspecting the logs using the docker-compose logs -f kibana command.


This completes our setup. Hope you found it helpful. Happy coding :-)

Cloning an object in JavaScript and avoiding Gotchas

If you're a JavaScript developer, you must have come across scenarios where you need to clone an object. How do you do it? In this article we will cover various approaches to cloning an object in JavaScript, their shortcomings, and finally the most reliable way to make a deep copy (clone) of an object in JavaScript.

Let us consider that our object to be cloned is this:

const person = {
  name: 'Dolores Abernathy',
  age: 32,
  dob: new Date('1988-09-01')
}

There can be various ways to clone it:

One way would be to declare a new variable and point it to the original object (which is not exactly cloning the object)

const clone = person

What you're doing here is referencing the same object. If you change clone.name, person.name will also change. Most of the time, this is not what you intend when you want to clone an object. You would want a copy of the object which does not share anything with the original object. Here, clone is just a reference to the same object being referred to by person. Most JavaScript developers would know about this, so this is not really a "Gotcha!". But the next two approaches I am going to show are definitely something you need to watch out for.
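
A quick illustration of that shared reference (the name change is made up, just for demonstration):

const person = { name: 'Dolores Abernathy', age: 32 }
const clone = person

// both variables point to the same object in memory
clone.name = 'Maeve Millay'

console.log(person.name) // 'Maeve Millay'
console.log(person === clone) // true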

You'll often see code using the spread operator to clone an object. For example:

const clone = { ...person }

Or code using Object.assign like this

const clone = Object.assign({}, person)

One might assume in both of the above cases that clone is a copy of the original person object and does not share anything with it. This is only partially correct. Can you guess the output of the code below? (Please take a moment to think about what the output should be before copy-pasting it)

const person = {
  name: 'Dolores Abernathy',
  age: 32,
  dob: new Date('1988-09-01')
}

const clone = { ...person }

// change the year for person.dob
person.dob.setYear(1986)

// check the clone's dob year
console.log(clone.dob.getFullYear())

What was your guess? 1988?

The correct answer is 1986. If you guessed the right answer and know the reason behind it, good! You have strong JavaScript fundamentals. But if you guessed it wrong, that's ok. That's the reason I am sharing this blog post: a lot of us assume that by using the spread operator we are creating a completely separate copy of the object, but this is not true. The same thing would happen with Object.assign({}, person) as well.

Both these approaches create a shallow copy of the original object. What does that mean? It means that all the fields of the original object that are primitive data types will be copied by value, but the object data types will be copied by reference.

In our original object, name and age are both primitive data types. So, changing person.name or person.age does not affect those fields in the clone object. However, dob is a date field, which is not a primitive data type. Hence, it is copied by reference. And when we change anything in the dob field of the person object, we also modify it in the clone object.
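
To make the difference concrete (again with made-up values):

const person = {
  name: 'Dolores Abernathy',
  age: 32,
  dob: new Date('1988-09-01')
}

const clone = { ...person }

clone.name = 'Maeve Millay' // primitive field: only the clone changes
clone.dob.setYear(1986) // object field: the Date instance is shared

console.log(person.name) // 'Dolores Abernathy'
console.log(person.dob.getFullYear()) // 1986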

How to create a deep copy of an object?

Now that we know that both the spread operator and the Object.assign method create shallow copies of an object, how do we create a deep copy? When I say deep copy, I mean that the cloned object should be a completely independent copy of the original object, and changing anything in one of them should not change anything in the other.

Some people try the JSON.parse and JSON.stringify combination for this. For example:

const person = {
  name: 'Dolores Abernathy',
  age: 32,
  dob: new Date('1988-09-01')
}

const clone = JSON.parse(JSON.stringify(person))

While it's not a bad approach, it has its shortcomings and you need to understand when to avoid it.

In our example, dob is a date field. When we do JSON.stringify, it is converted to a date string. And then when we do JSON.parse, the dob field remains a string and is not converted back to a Date object. So, while clone is a completely independent copy of person in this case, it is not an exact copy because the data type of the dob field is different in the two objects.

You can try it yourself:

console.log(person.dob.constructor) // [Function: Date]
console.log(clone.dob.constructor) // [Function: String]

This approach also doesn't work if any of the fields in the original object is a function. For example

const person = {
  name: 'Dolores Abernathy',
  age: 32,
  dob: new Date('1988-09-01'),
  getFirstName: function() {
    console.log(this.name.split(' ')[0])
  }
}

const clone = JSON.parse(JSON.stringify(person))

console.log(Object.keys(person)) // [ 'name', 'age', 'dob', 'getFirstName' ]

console.log(Object.keys(clone)) // [ 'name', 'age', 'dob' ]

Notice that getFirstName is missing from the clone object because it was skipped in the JSON.stringify operation, as it is a function.
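
Functions aren't the only casualty of the JSON round trip. Fields holding undefined are dropped, values like Infinity and NaN become null, and objects such as regular expressions lose their type. For example:

const obj = {
  a: undefined,
  b: Infinity,
  c: NaN,
  d: /some-regex/
}

const copy = JSON.parse(JSON.stringify(obj))

console.log(copy) // { b: null, c: null, d: {} }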

What is a reliable way to make a deep copy/clone of an object then?

Up until now, all the approaches we have discussed have had some shortcomings. Now we will talk about the approach that doesn't. If you need to make a truly deep clone of an object in JavaScript, use a third-party library like lodash:

const _ = require('lodash')

const person = {
  name: 'Dolores Abernathy',
  age: 32,
  dob: new Date('1988-09-01'),
  getFirstName: function() {
    console.log(this.name.split(' ')[0])
  }
}

const clone = _.cloneDeep(person)

// change the year for person.dob
person.dob.setYear(1986)

// check clone's dob year
console.log(clone.dob.getFullYear()) // should be 1988

// Check that all fields (including function getFirstName) are copied to new object
console.log(Object.keys(clone)) // [ 'name', 'age', 'dob', 'getFirstName' ]

// check the data type of dob field in clone
console.log(clone.dob.constructor) // [Function: Date]

You can see that the cloneDeep function of the lodash library makes a truly deep copy of an object.

Conclusion

Now that you know different ways of copying an object in JavaScript and the pros and cons of each approach, I hope this will help you make a more informed decision about which approach to use for your use case and avoid any "Gotchas" while writing code.

Happy Coding :-)

How to add third party scripts & inline scripts in your Nuxt.js app?

Problem statement

Let's say you have created a Nuxt app and one day your client or your boss asks you to add some snippet of code to every page of the site for analytics purposes. For example:

<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-111111111-1"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());

  gtag('config', 'UA-111111111-1');
</script>

Solution

Open your nuxt.config.js file and update the head section as follows:

  head: {
    __dangerouslyDisableSanitizers: ['script'],
    script: [
      {
        hid: 'gtm-script1',
        src: 'https://www.googletagmanager.com/gtag/js?id=UA-111111111-1',
        defer: true
      },
      {
        hid: 'gtm-script2',
        innerHTML: `
          window.dataLayer = window.dataLayer || [];
          function gtag(){dataLayer.push(arguments);}
          gtag('js', new Date());

          gtag('config', 'UA-111111111-1');
        `,
        type: 'text/javascript',
        charset: 'utf-8'
      }
    ]
  },

As you can see, the script array contains two objects. The first one includes the external script from googletagmanager.com. The second object shows how to include inline JavaScript. For that to work, however, you need to add the setting __dangerouslyDisableSanitizers: ['script']. I am not sure if this is the best or even the recommended approach, but it worked for me. If you happen to know a better alternative, I would definitely love to hear about it. You can mention it in the comments section below or tag me on Twitter.
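
As a side note, if you only needed the external script on one particular page rather than site-wide, the same kind of script entry can go in that page component's head() method instead of nuxt.config.js. A rough sketch (the page name is hypothetical):

// pages/analytics-demo.vue (hypothetical page component)
export default {
  head() {
    return {
      script: [
        {
          hid: 'gtm-script1',
          src: 'https://www.googletagmanager.com/gtag/js?id=UA-111111111-1',
          defer: true
        }
      ]
    }
  }
}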

Thanks and happy coding :-)

How to add authentication to your universal Nuxt app using nuxt/auth module?

Recently I was working on a Nuxt.js app and had to add authentication to it. The first thing I thought of was to use vuex to store two fields in the state:

  • isLoggedIn: a boolean representing whether user is logged in or not
  • loggedInUser: an object containing the user details for the session that we get from server

And then I added a middleware on pages where I wanted to restrict access to logged-in users only. The thought process behind this approach is right, but the problem is that when you refresh the page, the state from vuex is lost. In order to handle that, you would need to use localStorage, but that would work only if your app is running in spa mode, that is, on the client side only. If you are running your app in universal mode (server side rendered), then you will also need to use cookies and write a custom middleware that checks whether it is running on the client side or the server side and then use localStorage or cookies accordingly. Doing all of this would be a good exercise to learn how everything works, but adding it to a project where multiple people are working might not be a great idea in my opinion. Nuxt has an officially supported module just for this purpose: the auth module. In this post, I will talk about how to integrate the auth module into your nuxt app to support authentication using email and password.

Assumptions for the server API

We are making the assumption that the API server:

  • Is running on http://localhost:8080/v1
  • Uses cookie based sessions
  • Has a JSON based API
  • Has the following API endpoints:
    • POST /v1/auth/login: accepts email and password in request body and authenticates the user
    • POST /v1/auth/logout: does not need request body and deletes the user session from server
    • GET /v1/auth/profile: returns the logged in user's object

Overview of the steps involved

We will divide this post into following steps:

  • Installation of axios and auth modules
  • Configuration needed in nuxt.config.js
  • Using the state from auth module to check if user is logged in or not and accessing logged in user in our app components
  • Using the auth module to authenticate the user using email and password based authentication
  • Using middleware provided by the auth module to restrict access to pages to logged in users only

Step 1: Install the axios and auth modules

Open the terminal, navigate to the root directory of your project and run the following command:

npm install @nuxtjs/auth @nuxtjs/axios

Step 2: Configure axios and auth modules

Open your nuxt.config.js file, find the modules section and include the axios and auth modules and add their configuration:

  modules: [
    '@nuxtjs/axios',
    '@nuxtjs/auth'
  ],

  auth: {
    strategies: {
      local: {
        endpoints: {
          login: {
            url: '/auth/login',
            method: 'post',
            propertyName: false
          },
          logout: { 
            url: '/auth/logout', 
            method: 'post' 
          },
          user: { 
            url: '/auth/profile', 
            method: 'get', 
            propertyName: false 
          }
        },
        tokenRequired: false,
        tokenType: false
      }
    }
  },
  
  axios: {
    baseURL: 'http://localhost:8080/v1',
    credentials: true
  },
  

The auth object here includes the configuration. The auth module supports local strategy as well as OAuth2. Since we only have email and password based authentication in our case, we only need to provide the configuration for local strategy.

The endpoints section is where we specify the details about our API server's endpoints for login, logout and logged in user's profile and each of the config looks like this:

  user: { 
    url: '/auth/profile', 
    method: 'get', 
    propertyName: false 
  }          

url and method should be consistent with your server API. The url here needs to be relative to the baseURL config. The propertyName tells the auth module which property in the response object to look for. For example, if your API server response for GET /auth/profile is like this:

{
  "user": {
    "id: 1,
    "name": "Jon Snow",
    "email": "jon.snow@asoiaf.com"
  }
}

Then you can set the propertyName as user to look for only the user key in the API response. If you want to use the entire API response, you need to set propertyName to false.

Since our API server has cookie based sessions, we are setting the tokenRequired and tokenType to false.

tokenRequired: false,
tokenType: false
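
For comparison, if your API issued JWTs instead of cookie sessions, you would typically keep the token handling enabled and tell the auth module where to find the token in the login response. A hypothetical config for that case (double-check the option names against the version of the auth module you are using) might look like this:

// hypothetical local strategy config for a token (JWT) based API, not the one used in this tutorial
local: {
  endpoints: {
    login: { url: '/auth/login', method: 'post', propertyName: 'token' },
    logout: { url: '/auth/logout', method: 'post' },
    user: { url: '/auth/profile', method: 'get', propertyName: false }
  },
  tokenRequired: true,
  tokenType: 'Bearer'
}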

For a complete list of options supported by the auth module, you can visit their official documentation here

The axios object in the above config is used to provide the axios configuration. Here, we are setting the following properties:

  axios: {
    baseURL: 'http://localhost:8080/v1',
    credentials: true
  },

baseURL here is the root URL of our API, and any relative URL that we hit using axios in our app will be resolved against it. Setting credentials to true ensures that the authentication cookies/headers are sent to the API server with all requests.

Step 3: Activate vuex store in your app

In order to use the auth module, we need to activate the vuex store in our application since that's where the session related information will be stored. This can be done by adding any .js file inside the store directory of your app, and nuxt will register a namespaced vuex module with the name of the file. Let's go ahead and add a blank file called index.js to the store directory of our app. It's not mandatory to add an index.js file. You could have added any file, for example xyz.js, in the store directory and that would have activated the vuex store in your app.

The auth module that we have included in our project will automatically register a namespaced module named auth with the vuex store. And it has the following fields in the state:

  • loggedIn: A boolean denoting if the user is logged in or not
  • user: the user object as received from auth.strategies.local.user endpoint configured in our nuxt.config.js file.
  • strategy: This will be local in our case

It also adds the necessary mutations for setting the state. So, even though we haven't created any auth.js file in the store directory of our app, the auth module has automatically taken care of all this. If it helps to understand, imagine that a file named auth.js is automatically created by the auth module in the store directory of your app, even though this file doesn't actually exist. This means that using mapState on the auth module of your vuex store will work. For example, you can use this in any of your components or pages:

  computed: {
    ...mapState('auth', ['loggedIn', 'user'])
  },

Here is a complete example of a component using these properties:

<template>
  <b-navbar type="dark" variant="dark">
    <b-navbar-brand to="/">NavBar</b-navbar-brand>
    <b-navbar-nav class="ml-auto">
      <b-nav-item v-if="!loggedIn" to="/login">Login</b-nav-item>
      <b-nav-item v-if="!loggedIn" to="/register">Register</b-nav-item>
      <b-nav-item v-if="loggedIn" @click="logout">
        <em>Hello {{ user.name }}</em>
      </b-nav-item>
      <b-nav-item v-if="loggedIn" @click="logout">Logout</b-nav-item>
    </b-navbar-nav>
  </b-navbar>
</template>

<script>
import { mapState } from 'vuex'
export default {
  name: 'NavBar',
  computed: {
    ...mapState('auth', ['loggedIn', 'user'])
  },
  methods: {
    async logout() {
      await this.$auth.logout()
      this.$router.push('/login')
    }
  }
}
</script>

<style></style>

Alternative approach

Instead of using mapState, you can also reference loggedIn and user via this.$auth.loggedIn and this.$auth.user. So, in the above example, you could have rewritten the computed properties as shown below and it would still have worked fine:

  computed: {
    loggedIn() {
      return this.$auth.loggedIn
    },
    user() {
      return this.$auth.user
    }
  },

Step 4: Authenticating user using the auth module

We know how to use the auth module's APIs to check whether a user is logged in or not, and to access the logged-in user's details. But we haven't yet covered how to actually authenticate the user. This is done by using the this.$auth.loginWith method provided by the auth module in any of your components or pages. The first argument to this function is the name of the strategy. In our case this will be local. It's an async function which returns a promise. Here is an example of how to use it:

  try {
    await this.$auth.loginWith('local', {
      data: {
        email: 'email@xyz.com',
        password: 'password'
      }
    })
    // do something on success
  } catch (e) {    
    // do something on failure 
  }

So, typically you would have a login page with a form whose email and password fields are mapped to the component's data using v-model. And once you submit the form, you can run this function to authenticate using the auth module. Here is an example of a login page:

<template>
  <div class="row">
    <div class="mx-auto col-md-4 mt-5">
      <b-card>
        <b-form @submit="submitForm">
          <b-form-group
            id="input-group-1"
            label="Email address:"
            label-for="email"
          >
            <b-form-input
              id="email"
              v-model="email"
              type="email"
              required
              placeholder="Enter email"
            ></b-form-input>
          </b-form-group>

          <b-form-group
            id="input-group-2"
            label="Password:"
            label-for="password"
          >
            <b-form-input
              id="password"
              v-model="password"
              type="password"
              required
              placeholder="Enter password"
            ></b-form-input>
          </b-form-group>

          <b-button type="submit" variant="primary">Login</b-button>
        </b-form>
      </b-card>
    </div>
  </div>
</template>

<script>
export default {
  name: 'LoginPage',
  data() {
    return {
      email: '',
      password: ''
    }
  },
  methods: {
    async submitForm(evt) {
      evt.preventDefault()
      const credentials = {
        email: this.email,
        password: this.password
      }
      try {
        await this.$auth.loginWith('local', {
          data: credentials
        })
        this.$router.push('/')
      } catch (e) {
        this.$router.push('/login')
      }
    }
  }
}
</script>

<style></style>

In order to log out a logged-in user, you can use the this.$auth.logout method provided by the auth module. This one doesn't need any arguments. Here is an example:

  methods: {
    async logout() {
      await this.$auth.logout()
      this.$router.push('/login')
    }
  }

Step 5: Using auth middleware to restrict access to certain pages

The auth module also provides a middleware to restrict access to logged-in users. So, for example, if you want to restrict the /profile route of your application to logged-in users only, you can add the auth middleware to the profile.vue page like this:

export default {
  name: 'ProfilePage',
  middleware: ['auth']
}

For more details on how you can configure your components and pages to use the auth middleware, you can check out the official docs here.
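
If most of your pages should be restricted, another option described in the auth module's docs is to register the middleware globally via the router config in nuxt.config.js (a sketch, so verify it against your setup):

// nuxt.config.js: apply the auth middleware to every route by default
router: {
  middleware: ['auth']
}

Pages that should stay public, such as the login page, can then opt out by setting auth: false in their component options.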

Conclusion and References

This was kind of a getting-started post for the axios and auth modules with NuxtJS. We only covered the local strategy, but the auth module also supports OAuth2 and can be used to support login with Auth0, Facebook, GitHub and Google. I would definitely recommend checking out the Guide and API sections of the auth module:

https://auth.nuxtjs.org/

The axios module also provides many configuration options. Although we didn't cover much of it in this post, I would definitely recommend checking out the official docs for it as well:

https://axios.nuxtjs.org/

I hope this post was helpful in understanding the basics of auth module in Nuxt and makes it easier for you to navigate the rest of the official documentation on your own.

Happy coding :-)

How to read or modify spreadsheets from Google Sheets using Node.js?

First of all, a brief overview of our use case. Let's say I have a spreadsheet on Google Sheets which is not public, and I want to be able to read/modify it programmatically through some batch process running on my local machine or some server. This is something I had to do recently with a Node.js application and I found the authentication part a bit tricky to understand. So I thought of sharing my solution and I hope it helps someone in need. There might be better ways of doing this, but I am sharing what worked best for me.

Since there is no user interaction involved in our use case, we don't want to use the OAuth process where a user needs to open a browser and sign in to their Google account to authorize the application. For scenarios like this, Google has the concept of a service account. A service account is a special type of Google account intended to represent a non-human user that needs to authenticate and be authorized to access data in Google APIs. Just like a normal account, a service account also has an email address (although it doesn't have an actual mailbox and you cannot send emails to a service account email). And just like you can share a Google sheet with a user using their email address, you can share a Google sheet with a service account as well using its email address. And this is exactly what we are going to do in this tutorial. We will create a spreadsheet on Google Sheets using a regular user, share it with a service account (that we will create) and use the credentials of the service account in our Node.js script to read and modify that sheet.

Pre-requisites

This tutorial assumes that you have:

  • Experience working with Node.js
  • A Google account
  • A project set up on Google developers console where you have admin privileges

Steps Overview

Here is the list of steps we will be following through this tutorial:

  1. Create a spreadsheet on Google sheets
  2. Enable Google Sheets API in our project on Google developers console
  3. Create a service account
  4. Share the spreadsheet created in step 1 with the service account created in step 3
  5. Write a Node.js service to access the google sheets created in step 1 using the service account credentials
  6. Test our service written in step 5

Now that we have an outline of what all we are going to do, let's get started

Step 1: Create a spreadsheet on Google Sheets

This one doesn't really need any instructions. You just need to log in to your Google account, open Google Drive and create a new Google Sheet. You can put some random data in it. One thing that we need to take note of is the sheet's id. When you have the sheet open in your browser, the url will look something like this: https://docs.google.com/spreadsheets/d/1-XXXXXXXXXXXXXXXXXXXSgGTwY/edit#gid=0. And in this url, 1-XXXXXXXXXXXXXXXXXXXSgGTwY is the spreadsheet's id and it will be different for each spreadsheet. Take a note of it because we will need it in our Node.js script to access this spreadsheet. For this tutorial, here is the data we have stored in our spreadsheet:

Step 2: Enable Google Sheets API in our project on Google developers console

We need to enable the Google Sheets API for our project in order to be able to use it. This tutorial assumes that you already have a project in Google developers console; if you don't have one, you can create a new one very easily. Once you have the project on Google developers console, open the project dashboard. There you should see a button Enable APIs and Services.

Click on it and search for Google sheets API using the search bar. Once you see it, click on it and then click on Enable

Step 3: Create a Service Account

Once you enable the Google Sheets API in your project, you will see the page where you can configure the settings for this API. Click on the Credentials tab on the left sidebar. Here you will see a list of OAuth client IDs and service accounts. By default there should be none.

Click on Create Credentials button at the top and select Service Account option

Enter the name and description of the service account and click Create button.

Click Continue on the next dialog

On the next dialog, you get an option to create a key. This is an important step. Click on the Create Key button and choose JSON as the format. This will ask you to download the JSON file to your local machine.

For this tutorial, I have renamed the file and saved it as service_account_credentials.json on my local machine.

Keep it somewhere safe. This key file contains the credentials of the service account that we need in our Node.js script to access our spreadsheet from Google Sheets.

Once you've followed all of these steps, you should see the newly created service account on the credentials page

Take a note of the email address of the service account. We will need to share our spreadsheet with this account.

Step 4: Share the spreadsheet created in step 1 with the service account created in step 3

Now that we have a service account, we need to share our spreadsheet with it. It's just like sharing a spreadsheet with any normal user account. Open the spreadsheet in your browser and click on the Share button in the top right corner. That will open a modal where you need to enter the email address of the service account. Uncheck the checkbox for Notify people, since this will send an email and, because the service account does not have a mailbox, it will give you a mail delivery failure notification.

Click OK button to share the spreadsheet with the service account.

This completes all the configuration steps. Now we can get to the fun part :-)

Step 5: Write a Node.js service to access the google sheet using the service account credentials

We will create our script as a service that can be used as part of a bigger project. We will call it googleSheetsService.js. It will expose the following APIs:

  • getAuthToken
  • getSpreadSheet
  • getSpreadSheetValues

The function getAuthToken is where we will handle the authentication, and it will return a token. Then we will use that token and pass it on to the other methods.

We will not be covering writing data to the spreadsheet but once you get the basic idea of how to use the API, it will be easy to extend the service to add more and more functions supported by the Google Sheets API.

We will be using the googleapis npm module. So, let's get started by creating a directory for this demo project. Let's call it google-sheets-demo.

cd $HOME
mkdir google-sheets-demo
cd google-sheets-demo

Install the googleapis npm module by running npm install googleapis from this directory. Then copy the service_account_credentials.json file that we created in step 3 to this directory (google-sheets-demo) and create our new file googleSheetsService.js. Paste the following lines into the file:

// googleSheetsService.js

const { google } = require('googleapis')

const SCOPES = ['https://www.googleapis.com/auth/spreadsheets']

async function getAuthToken() {
  const auth = new google.auth.GoogleAuth({
    scopes: SCOPES
  });
  const authToken = await auth.getClient();
  return authToken;
}

module.exports = {
  getAuthToken,
}

For now our service has only one function that returns the auth token. We will add another function getSpreadSheet soon. First let us see what our function does.

First, we require the googleapis npm module. Then we define SCOPES. When we create an auth token using Google APIs, there is a concept of scopes which determines the level of access our client has. For reading and editing spreadsheets, we need access to the scope https://www.googleapis.com/auth/spreadsheets. Similarly, if we only had to give read-only access to spreadsheets, we would have used the scope https://www.googleapis.com/auth/spreadsheets.readonly.

Inside the getAuthToken function, we are calling the constructor new google.auth.GoogleAuth passing in the scopes in the arguments object.

This function expects two environment variables to be available: GCLOUD_PROJECT, which is the project ID of your Google developers console project, and GOOGLE_APPLICATION_CREDENTIALS, which denotes the path of the file containing the credentials of the service account.

We will need to set these environment variables from the command line. You can get the project ID from the url of the project when you open it in your web browser. It should look like this:

https://console.cloud.google.com/home/dashboard?project={project ID}

And GOOGLE_APPLICATION_CREDENTIALS must contain the path of the service_account_credentials.json file. So, go to the terminal and from the google-sheets-demo directory, run the following commands to set these environment variables:

export GCLOUD_PROJECT={project ID of your google project}
export GOOGLE_APPLICATION_CREDENTIALS=./service_account_credentials.json

You need to make sure that you have the credentials file copied in the current directory.
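
As an aside, if you would rather not depend on environment variables, the GoogleAuth constructor can also be pointed at the key file directly via its keyFile option (the path below assumes the file name we used in step 3):

// alternative to the environment variables: pass the key file path explicitly
const { google } = require('googleapis');

const auth = new google.auth.GoogleAuth({
  keyFile: './service_account_credentials.json',
  scopes: ['https://www.googleapis.com/auth/spreadsheets']
});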

Now we will add two more functions to our service:

  • getSpreadSheet
  • getSpreadSheetValues

The first one will return metadata about the spreadsheet while the second one will return the data inside the spreadsheet. Our modified googleSheetsService.js file should look like this:

// googleSheetsService.js

const { google } = require('googleapis');
const sheets = google.sheets('v4');

const SCOPES = ['https://www.googleapis.com/auth/spreadsheets'];

async function getAuthToken() {
  const auth = new google.auth.GoogleAuth({
    scopes: SCOPES
  });
  const authToken = await auth.getClient();
  return authToken;
}

async function getSpreadSheet({spreadsheetId, auth}) {
  const res = await sheets.spreadsheets.get({
    spreadsheetId,
    auth,
  });
  return res;
}

async function getSpreadSheetValues({spreadsheetId, auth, sheetName}) {
  const res = await sheets.spreadsheets.values.get({
    spreadsheetId,
    auth,
    range: sheetName
  });
  return res;
}


module.exports = {
  getAuthToken,
  getSpreadSheet,
  getSpreadSheetValues
}

At the top we have added a line

const sheets = google.sheets('v4');

This is to use the sheets API. Then we have added the two new functions getSpreadSheet and getSpreadSheetValues. To see all the supported API endpoints for Google Sheets API, check this link https://developers.google.com/sheets/api/reference/rest.

For our demo, we are only using two of those. The getSpreadSheet function expects the auth token and the spreadsheetId as its parameters. And getSpreadSheetValues expects one additional parameter, the sheetName from which to fetch the data. By default, a spreadsheet contains a single sheet, named Sheet1. Finally, we export the newly added functions via module.exports.

This completes our googleSheetsService. If you need to support more API functions, you can check the reference using the link above, add the corresponding wrapper functions to this service and export them using module.exports. Any consumer of this service will first need to call the getAuthToken function to get the auth token and then pass that token to the subsequent functions like getSpreadSheet, getSpreadSheetValues, etc. Now that we have our service ready, we just need to test it to make sure it is working fine.
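
For example, a write helper built the same way could be added to googleSheetsService.js (it reuses the sheets object defined at the top of the file). This is only a sketch; values.update is part of the Sheets API, but the range and the valueInputOption shown here are just illustrative choices:

// a possible extension: write a 2-D array of values to a given range
async function updateSpreadSheetValues({spreadsheetId, auth, range, values}) {
  const res = await sheets.spreadsheets.values.update({
    spreadsheetId,
    auth,
    range, // e.g. 'Sheet1!A1:C2'
    valueInputOption: 'USER_ENTERED', // parse input the way the Sheets UI would
    requestBody: {
      values // e.g. [['Name', 'Country'], ['John', 'England']]
    }
  });
  return res;
}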

Step 6: Test our service

So we have our service ready. But does it work? Let's check that out.

While typically we would use a testing framework to run unit tests, to keep this tutorial simple we are going to write a plain Node.js script. From our project's directory, create a new file called test.js and copy-paste the following contents:

const {
  getAuthToken,
  getSpreadSheet,
  getSpreadSheetValues
} = require('./googleSheetsService.js');

const spreadsheetId = process.argv[2];
const sheetName = process.argv[3];

async function testGetSpreadSheet() {
  try {
    const auth = await getAuthToken();
    const response = await getSpreadSheet({
      spreadsheetId,
      auth
    })
    console.log('output for getSpreadSheet', JSON.stringify(response.data, null, 2));
  } catch(error) {
    console.log(error.message, error.stack);
  }
}

async function testGetSpreadSheetValues() {
  try {
    const auth = await getAuthToken();
    const response = await getSpreadSheetValues({
      spreadsheetId,
      sheetName,
      auth
    })
    console.log('output for getSpreadSheetValues', JSON.stringify(response.data, null, 2));
  } catch(error) {
    console.log(error.message, error.stack);
  }
}

function main() {
  testGetSpreadSheet();
  testGetSpreadSheetValues();
}

main()

This file contains two test functions and a main function that calls those test functions. At the bottom of the file, we are executing the main function. This script expects two command line arguments:

  • spreadsheetId (this is the ID that we got from step 1)
  • sheetName (this is the name of the worksheet for which you want to see the values. When you create a new spreadsheet, it is Sheet1)

Also, ensure that the env variables GCLOUD_PROJECT and GOOGLE_APPLICATION_CREDENTIALS are set properly.

Now, from the terminal, run this script

node test.js <your google sheet's spreadsheet id> <sheet name of the worksheet>

If you have followed all the steps correctly, you should see output like this:

output for getSpreadSheet {
  "spreadsheetId": "1-jG5jSgGTwXXXXXXXXXXXXXXXXXXY",
  "properties": {
    "title": "test-sheet",
    "locale": "en_US",
    "autoRecalc": "ON_CHANGE",
    "timeZone": "Asia/Calcutta",
    "defaultFormat": {
      "backgroundColor": {
        "red": 1,
        "green": 1,
        "blue": 1
      },
      "padding": {
        "top": 2,
        "right": 3,
        "bottom": 2,
        "left": 3
      },
      "verticalAlignment": "BOTTOM",
      "wrapStrategy": "OVERFLOW_CELL",
      "textFormat": {
        "foregroundColor": {},
        "fontFamily": "arial,sans,sans-serif",
        "fontSize": 10,
        "bold": false,
        "italic": false,
        "strikethrough": false,
        "underline": false
      }
    }
  },
  "sheets": [
    {
      "properties": {
        "sheetId": 0,
        "title": "Sheet1",
        "index": 0,
        "sheetType": "GRID",
        "gridProperties": {
          "rowCount": 1000,
          "columnCount": 26
        }
      }
    }
  ],
  "spreadsheetUrl": "https://docs.google.com/spreadsheets/d/1-jG5jSgGTwXXXXXXXXXXXXXXXXXXY/edit"
}
output for getSpreadSheetValues {
  "range": "Sheet1!A1:Z1000",
  "majorDimension": "ROWS",
  "values": [
    [
      "Name",
      "Country",
      "Age"
    ],
    [
      "John",
      "England",
      "30"
    ],
    [
      "Jane",
      "Scotland",
      "23"
    ],
    [
      "Bob",
      "USA",
      "45"
    ],
    [
      "Alice",
      "India",
      "33"
    ]
  ]
}

If you get an error, it means you have not followed all the steps correctly. For this tutorial, the version of the googleapis npm module was 43.0.0. You might face issues if you are using an older version of the module. Make sure the spreadsheetId and sheetName are correct and the environment variables are set properly. If you still get an error, you should check the error message and code to see what might be causing the problem.

References

I would definitely recommend checking out these references (especially the Official Google Sheets API reference) to get a more in depth understanding of the sheets API and how to use the Node.js client.

Hope you found this tutorial helpful. Thanks and happy coding :-)

Setting up Elasticsearch and Kibana on Docker with X-Pack security enabled

This tutorial assumes that you are familiar with Elasticsearch and Kibana and have some understanding of Docker. Before diving into the objective of this article, I would like to provide a brief introduction to X-Pack and go over some of the latest changes in Elasticsearch version 6.8 which allow us to use the security features of X-Pack for free with the basic license.

X-Pack Security and Elasticsearch 6.8

X-Pack is a set of features that extends the Elastic Stack, that is, Elasticsearch, Kibana, Logstash and Beats. It includes features like security, monitoring, machine learning, reporting, etc. In this article, we are mainly concerned with the security features of X-Pack.

X-Pack security makes securing your Elasticsearch cluster very easy and highly customizable. It allows you to set up authentication for your Elasticsearch cluster, create different users with different credentials and different levels of access. It also allows you to create different roles and assign similar users to the same role. For example, if you want to grant read-only access to certain users for certain indices of your cluster but want to ensure they cannot write to those indices, you can easily achieve that with X-Pack security. And this is just the tip of the iceberg. You can check out the security API here for a more detailed view of what all you can do with it.

But all these features were not always available for free. Prior to version 6.8, security was not a part of the Basic license. I'll quickly explain what this means. The Elastic Stack has 4 different types of licenses that you can see here

  • Open Source
  • Basic
  • Gold
  • Platinum

Gold and Platinum are paid licenses whereas Open Source and Basic are free. If you visit the above mentioned link, you can see which features are available under which license. If you expand the security dropdown on that page, you can see that some of the security features are available as a part of the Basic license. As of writing this article, the following security features are available under the Basic license:

Screenshot-from-2019-06-09-12-00-52

Now, this list reflects the latest version of the Elastic Stack which, at the time of writing this article, is version 7.1. Security was made available under the Basic license from version 6.8 onwards. This is important because it means that if you want to use the security features in your Elasticsearch setup for free, you need version 6.8 or later.

And that's what we will be using for this article.

Objective

The objective of this article is to set up Elasticsearch and Kibana using Docker Compose with security features enabled. We will be enabling authentication on Elasticsearch so that all API calls need to include valid credentials. Also, the Kibana UI will require a username and password to log in. For our setup, we will be using Docker Compose, which makes the whole thing very easy to deploy anywhere and scale. I'll be using Ubuntu 18.04 for this tutorial but the steps will remain more or less the same on any other Unix-based system and might not be too different on a Windows-based system either. The only piece of code that we will be writing in this article is a docker-compose.yml file. We will start with a minimal docker-compose.yml file to get the Elasticsearch and Kibana setup up and running, and then gradually tweak it to enable the security features.

Pre requisites

  • Linux based OS
  • Docker and Docker Compose installed
  • Basic understanding of Docker and Docker Compose
  • Knowledge of Elasticsearch and Kibana
  • Experience with Linux command line

If you are using some other operating system, you can follow the instructions specific to that OS but the process remains more or less the same.

Step 1 - Create a basic docker-compose.yml file for Elasticsearch and Kibana

In this step we will create our docker-compose.yml file with two services, elasticsearch and kibana and map their respective ports to the host OS

Let us first start with creating a directory for our project. Open your terminal and type the following

$ cd
$ mkdir elasticsearch-kibana-setup
$ cd elasticsearch-kibana-setup
$ touch docker-compose.yml

Then open the newly created docker-compose.yml file and paste the following lines in it:

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
    ports:
      - 9200:9200

  kibana:
    depends_on:
      - elasticsearch  
    image: docker.elastic.co/kibana/kibana:6.8.0
    ports:
      - 5601:5601

The official docker images for Elastic Stack can be found here

As discussed in the beginning of this article, we will be using version 6.8 for this setup. If you visit the above link and click on Elasticsearch image 6.8 to expand, you'll see two images:

docker pull docker.elastic.co/elasticsearch/elasticsearch:6.8.0   
docker pull docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.0

Screenshot-from-2019-06-09-15-29-52

You can see that one of them has the oss tag while the other does not. The difference between these two images is the license: the oss one comes with the Open Source license whereas the non-oss one comes with the Basic license. Since the X-Pack security features are only available with the Basic license, we will be using the non-oss version. Please note that this is also free, as explained in the beginning of the article.

Apart from specifying the images, we are mapping the ports of the containers to the ports on the host machine. Elasticsearch runs on port 9200 and Kibana on port 5601 so we are mapping both these ports to the corresponding ports on the host machine. You can map them to some other port as well. The syntax remains the same:

<Host Port>:<Container Port>

So, for instance, if you want to access elasticsearch on port 8080 of your host machine, you'll need to specify the config as:

8080:9200

For now, we'll be mapping it to 9200 in this article. Also, the depends_on setting in the kibana service ensures that the kibana container is started only after the elasticsearch container has been started (note that this controls start order, not whether Elasticsearch is actually ready to accept connections). So, let's try to start our setup with the above settings by running the following command:

$ docker-compose up

This will start pulling the images from docker registry and create the containers. This may take a while depending on whether you already have the images on your machine or not and also depending on your internet speed. After the images have been pulled, you'll start seeing container logs which will take a few more seconds. Once both Elasticsearch and Kibana are ready, you'll see something like this in your console:

elasticsearch_1  | [2019-06-09T10:14:21,167][INFO ][o.e.c.r.a.AllocationService] [pKPbPLz] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_1][0]] ...]).
kibana_1         | {"type":"log","@timestamp":"2019-06-09T10:14:21Z","tags":["info","migrations"],"pid":1,"message":"Pointing alias .kibana to .kibana_1."}
kibana_1         | {"type":"log","@timestamp":"2019-06-09T10:14:21Z","tags":["info","migrations"],"pid":1,"message":"Finished in 175ms."}
kibana_1         | {"type":"log","@timestamp":"2019-06-09T10:14:21Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
elasticsearch_1  | [2019-06-09T10:14:21,282][INFO ][o.e.c.m.MetaDataIndexTemplateService] [pKPbPLz] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
elasticsearch_1  | [2019-06-09T10:14:21,326][INFO ][o.e.c.m.MetaDataIndexTemplateService] [pKPbPLz] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
elasticsearch_1  | [2019-06-09T10:14:21,343][INFO ][o.e.c.m.MetaDataIndexTemplateService] [pKPbPLz] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
kibana_1         | {"type":"log","@timestamp":"2019-06-09T10:14:22Z","tags":["status","plugin:spaces@6.8.0","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}

Look for the lines which say that the status has changed to green. This means that our setup is ready. If you don't see such lines and instead see an error message, something went wrong and you'll need to debug and resolve the issue.

Once the services are up and running, open your browser and open the url http://localhost:9200/ and you will see something like this:

{
  "name" : "pKPbPLz",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "AjqbFZ0qRF-X0_TQZqWIZA",
  "version" : {
    "number" : "6.8.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "65b6179",
    "build_date" : "2019-05-15T20:06:13.172855Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

That's Elasticsearch. Also, if you navigate to http://localhost:5601/ you should see the Kibana console. So, with just 13 lines in your docker-compose.yml file, you have set up a single node Elasticsearch cluster along with Kibana.
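
You can also run the same checks from the terminal with curl (assuming curl is installed on your host machine):

# Should return the same JSON banner you saw in the browser
$ curl http://localhost:9200

# Cluster health — look for "status" : "green" (or "yellow" on a single-node cluster)
$ curl http://localhost:9200/_cluster/health?pretty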

Now, although this works, there is a security challenge if you want to deploy it to production. If your server is not part of a VPC (Virtual Private Cloud) and the ports 9200 and 5601 are accessible to the world, your Elasticsearch and Kibana services can be accessed by anyone. There is no authentication or authorization, so anyone can make any changes to your cluster using the Elasticsearch API directly or through the Kibana UI. What if we wanted to keep those ports accessible but require some sort of authentication so that only those who have the right credentials can access our Elasticsearch instance or log in to the Kibana UI? Also, what if we want to ensure that certain users only have a limited set of privileges? For example, we want certain users to be able to search any index in our Elasticsearch cluster but not be able to create a new index, drop an index, change a mapping or write to an index. Or let's say you don't want your Elasticsearch instance directly accessible to the rest of the world, but want to keep the Kibana UI accessible behind authentication, with different Kibana users having different access levels. All of this can be achieved with X-Pack security and that's what we will be exploring next.

Go back to the terminal window where you ran the docker-compose up command and press CTRL+C to stop the containers and tear down the setup.

Step 2 - Customize Elasticsearch and Kibana services with environment variables

In order to enable X-Pack security, we will need to customize our elasticsearch and kibana services. Elasticsearch settings can be customized via the elasticsearch.yml file and Kibana settings via the kibana.yml file. There are multiple ways to do this with Docker. We could pass environment variables via our docker-compose.yml file, and although that would normally be the ideal way, the way Elasticsearch and Kibana environment variables are passed is not the same and can cause problems in certain deployment environments. You can read more about it here. For this tutorial, we will create custom elasticsearch.yml and kibana.yml files and bind mount them into their respective containers, overriding the default files in those containers.

This will become clearer in the next steps. First, create two files, elasticsearch.yml and kibana.yml, in the same directory as our docker-compose.yml file:

$ touch elasticsearch.yml
$ touch kibana.yml

Then open elasticsearch.yml and paste the following lines in it:

cluster.name: my-elasticsearch-cluster
network.host: 0.0.0.0
xpack.security.enabled: true

Here we are setting the name of our cluster to my-elasticsearch-cluster. The setting network.host: 0.0.0.0 makes Elasticsearch bind to all network interfaces of the container, so it is reachable from outside the container. And the last setting enables X-Pack security. This ensures that anyone trying to access our Elasticsearch instance must provide valid credentials.

Now open the kibana.yml file and paste the following lines in it:

server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]

Here we are setting the server name. The setting server.host: "0" makes Kibana listen on all network interfaces of the container. And the last setting, elasticsearch.hosts, includes the list of addresses of the Elasticsearch nodes. The Kibana instance can reach the Elasticsearch instance at the address http://elasticsearch:9200. This is made possible by Docker Compose: if you have multiple services in your compose file, containers belonging to one service can reach containers of other services by using the other service's name. You don't even need to expose the ports for this. So, in our docker-compose.yml file, even if we had not mapped the ports for Elasticsearch, our Kibana instance would still be able to reach the Elasticsearch instance at http://elasticsearch:9200. However, in that case, we won't be able to connect to our Elasticsearch instance from our host machine. I won't dive deeper into the details of how networking works in Docker because that is beyond the scope of this article, but I would definitely suggest you go through the official docs to get a better understanding.
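
If you want to convince yourself that this service-name resolution works, you can try reaching Elasticsearch from inside the Kibana container while the setup is running (a quick sketch, assuming curl is available inside the Kibana image):

$ docker-compose exec kibana curl http://elasticsearch:9200

This should print the same JSON banner that you get when you visit http://localhost:9200 from your host machine.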

Ok, so now that we have our config files ready, we need to bind mount them to their respective containers in our docker-compose.yml file. So open the docker-compose.yml file and change it to look like this:

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
    ports:
      - 9200:9200
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml

  kibana:
    depends_on:
      - elasticsearch  
    image: docker.elastic.co/kibana/kibana:6.8.0
    ports:
      - 5601:5601
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml

The only change we have made here is that we have added the volumes sections. By using volumes, we can map a directory or an individual file from the host machine to a directory or a file in the container. Here we are mapping individual files only. The default location of the config file in the Elasticsearch container is /usr/share/elasticsearch/config/elasticsearch.yml and we are replacing it with the elasticsearch.yml file that we created earlier. Similarly, we are replacing the default kibana.yml file at /usr/share/kibana/config/kibana.yml with our newly created file. With these changes, let's try to start our Docker Compose setup again by running the command:

$ docker-compose up

This will most likely give you an error. If you see the elasticsearch logs (lines starting with elasticsearch_1 |), you might see some error like this:

elasticsearch_1  | [1]: Transport SSL must be enabled if security is enabled on a [basic] license. Please set [xpack.security.transport.ssl.enabled] to [true] or disable security by setting [xpack.security.enabled] to [false]

This means that Elasticsearch won't start since the initial checks have failed. Consequently, Kibana won't be able to connect to it and you'll see something like this in Kibana logs:

kibana_1         | {"type":"log","@timestamp":"2019-06-11T17:31:14Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}

Press Ctrl+C to stop the containers and tear down the setup because this ain't working and we gotta fix it.

In order to get elasticsearch working, we will need to enable SSL and also install the SSL certificate in our elasticsearch container. I will be walking through the process of creating a new certificate and using that. If you already have a certificate file, you can skip that part. For this, we will need to take a step back and disable x-pack security on our elasticsearch instance so that we can get it up and running and then we will get inside our container shell and generate the certificate.

Step 3 - Create SSL certificate for Elasticsearch and enable SSL

First, we need to disable x-pack security temporarily so that we can get our Elasticsearch container up and running. So, open the elasticsearch.yml file and disable x-pack security by changing the following line:

xpack.security.enabled: false

Then bring up the containers again by running:

$ docker-compose up

This should work fine now and bring up our Elasticsearch and Kibana services just like before. Now, we need to generate the certificates and we will be using the elasticsearch-certutil utility. For this, we will need to get inside the Docker container running the elasticsearch service. This is really easy using docker-compose. Think of it like this: we can execute any command inside a Docker container by using the command:

$ docker-compose exec <service name> <command>

And if we want to get inside the container's shell, we essentially want to execute the bash command on our container. So, our command becomes:

$ docker-compose exec elasticsearch bash

Here elasticsearch is our service and bash is our command. We need to do this while the container is running so open another terminal window and paste the above command (make sure to run this command from the same directory where your docker-compose.yml file is located)

Once you're inside the container, your shell prompt should look something like this:

[root@c9f915e86309 elasticsearch]#

Now run the following command here:

[root@c9f915e86309 elasticsearch]# bin/elasticsearch-certutil ca

This will print some warnings describing what it is going to do. I recommend you read them. It will then prompt you for a file name and a password. Just press ENTER for both to proceed:

Please enter the desired output file [elastic-stack-ca.p12]: 
Enter password for elastic-stack-ca.p12 : 

This will create a file elastic-stack-ca.p12 in the directory from which you ran the above command. You can check by running the ls command. This is the certificate authority we will be using to create the certificate. Now, run the command:

[root@c9f915e86309 elasticsearch]# bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

This will again print some warnings and describe what it is going to do. I recommend you read those too. It will then prompt you for a password and a file name. Press ENTER at each step to proceed:

Enter password for CA (elastic-stack-ca.p12) : 
Please enter the desired output file [elastic-certificates.p12]: 
Enter password for elastic-certificates.p12 : 

This will create the file elastic-certificates.p12, which is what we need. We need this file outside the container, on the host machine, because it will vanish once we destroy the container. This file is in PKCS12 format, which includes both the certificate and the private key. In order to copy this file from the container to the host machine, press CTRL+D to first exit the container.

And then run the following command on your host machine (from the same directory where docker-compose.yml file is present)

$ docker cp "$(docker-compose ps -q elasticsearch)":/usr/share/elasticsearch/elastic-certificates.p12 .

The above command might seem a bit tricky to some, so I will add a bit of explanation here. Those of you who already understand how it works can proceed to Step 4.

Let us first see what docker-compose ps does. If you run the command you'll see output like this:

                      Name                                    Command               State                Ports              
----------------------------------------------------------------------------------------------------------------------------
elasticsearch-kibana-setup_elasticsearch_1   /usr/local/bin/docker-entr ...   Up      0.0.0.0:9200->9200/tcp, 9300/tcp
elasticsearch-kibana-setup_kibana_1          /usr/local/bin/kibana-docker     Up      0.0.0.0:5601->5601/tcp          

This shows all the docker containers running or stopped which are being managed by our docker-compose.yml file.

If you check the help for this command:

$ docker-compose ps --help

You will see output like this:

List containers.

Usage: ps [options] [SERVICE...]

Options:
    -q, --quiet          Only display IDs
    --services           Display services
    --filter KEY=VAL     Filter services by a property

You can see that by using the -q flag, we can get just the id of the container. You can also see that by providing the service name, we can limit the output to just the service we are interested in. So, if we want to get the id of the elasticsearch container, we need to run the command:

$ docker-compose ps -q elasticsearch

This should get you the id of the elasticsearch container.

Now, if we go back to our docker cp command above, you can check the syntax of that command by using help again:

$ docker cp --help

This should display the help:


Usage:	docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
	docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH

Copy files/folders between a container and the local filesystem

Options:
  -a, --archive       Archive mode (copy all uid/gid information)
  -L, --follow-link   Always follow symbol link in SRC_PATH

You can see that we need to specify the command as:

docker cp <container id>:<src path> <dest path on host>

Our source path in this case is /usr/share/elasticsearch/elastic-certificates.p12 on the elasticsearch container, and we are getting the id of the elasticsearch container by using the docker-compose ps -q elasticsearch command. We need to copy the file to the current directory on the host, so our destination path is . (a single dot). Hence the command becomes:

$ docker cp "$(docker-compose ps -q elasticsearch)":/usr/share/elasticsearch/elastic-certificates.p12 .

We will also copy the CA file by running the command:

$ docker cp "$(docker-compose ps -q elasticsearch)":/usr/share/elasticsearch/elastic-stack-ca.p12 .

Now that we have our certificate file on our host machine, we will bind mount it into our container just like we did for the elasticsearch.yml file. So, if you already have an SSL certificate, you can use that in place of this one.

Step 4 - Installing the SSL certificate on Elasticsearch and enabling TLS in config

Now that we have the SSL certificate available, we can enable X-Pack security on our Elasticsearch node along with TLS. We need to bind mount our certificate from the host machine into the container. First, go back to the terminal where you ran the docker-compose up command and press CTRL+C to stop the containers. Then open the docker-compose.yml file and change it so that it looks like this:

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
    ports:
      - 9200:9200
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12

  kibana:
    depends_on:
      - elasticsearch
    image: docker.elastic.co/kibana/kibana:6.8.0
    ports:
      - 5601:5601
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml

Now, open your elasticsearch.yml file and change it to this:

cluster.name: my-elasticsearch-cluster
network.host: 0.0.0.0
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.type: PKCS12
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.type: PKCS12

The first 3 lines are the same as before (we have changed xpack.security.enabled back to true). The rest of the lines enable TLS for the transport layer and point to our certificate file, which contains both the certificate and the private key and therefore serves as both the keystore and the truststore. You can check out all the security settings here.

Once this is done, go back to the terminal and bring up the containers again

$ docker-compose up

So, what do you see? Still not working, eh? This is because Kibana is no longer able to connect to our Elasticsearch instance: we now have security enabled but haven't configured the credentials on Kibana. So, you'll see continuous logs like this:

kibana_1         | {"type":"log","@timestamp":"2019-06-11T19:03:35Z","tags":["warning","task_manager"],"pid":1,"message":"PollError [security_exception] missing authentication token for REST request [/_template/.kibana_task_manager?include_type_name=true&filter_path=*.version], with { header={ WWW-Authenticate=\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\" } }"}

Also, if you open your web browser and go to http://localhost:9200 you will see a prompt for username and password. And if you press ESC, you get this error:

{
  "error": {
    "root_cause": [
      {
        "type": "security_exception",
        "reason": "missing authentication token for REST request [/]",
        "header": {
          "WWW-Authenticate": "Basic realm=\"security\" charset=\"UTF-8\""
        }
      }
    ],
    "type": "security_exception",
    "reason": "missing authentication token for REST request [/]",
    "header": {
      "WWW-Authenticate": "Basic realm=\"security\" charset=\"UTF-8\""
    }
  },
  "status": 401
}

And if you try visiting http://localhost:5601 you will get the error:

Kibana server is not ready yet

So, we have solved one part of the problem. We have secured our Elasticsearch instance and nobody can access it without providing the correct credentials. But we don't know what the correct credentials are. We will set those up in the next step and configure Kibana to use them. For now, keep the docker-compose up command running since we need to go inside the Elasticsearch container again.

Step 5 - Generate default passwords and configure the credentials in Kibana

Before we generate the passwords for the built-in accounts of the Elastic Stack, we first need to change our docker-compose.yml file to bind mount the data volume of Elasticsearch. Up until now, the storage of our containers has been temporary: once we destroy the containers, all the data inside them gets destroyed as well. So if you created any indices, users, etc. in Elasticsearch, they will no longer persist once you do docker-compose down to bring down the services. That's not something we would want in production. We want to ensure that data changes persist between container restarts. For that, we need to bind mount the data directory from the elasticsearch container to a directory on the host machine.

First, bring down all the running containers by executing the following command:

docker-compose down

Then create a directory called docker-data-volumes in the same directory where your docker-compose.yml file is located. You can give it any other name but for this tutorial we will call it docker-data-volumes. Inside that directory, create another directory called elasticsearch

mkdir docker-data-volumes
mkdir docker-data-volumes/elasticsearch

Now under the volumes section of elasticsearch service in your docker-compose.yml file, add the following line:

      - ./docker-data-volumes/elasticsearch:/usr/share/elasticsearch/data

As explained earlier, when we need to bind mount a file or directory from host machine to container, we specify the <host path>:<container path>. The default path for data inside an elasticsearch container is /usr/share/elasticsearch/data and we are binding it to the directory ./docker-data-volumes/elasticsearch on host machine. So your docker-compose.yml file should now look like this:

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
    ports:
      - 9200:9200
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
      - ./docker-data-volumes/elasticsearch:/usr/share/elasticsearch/data

  kibana:
    depends_on:
      - elasticsearch
    image: docker.elastic.co/kibana/kibana:6.8.0
    ports:
      - 5601:5601
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
 

Bring up the containers by running

docker-compose up

While docker-compose up is running in one terminal, open another terminal to get inside the elasticsearch container by running the command:

$ docker-compose exec elasticsearch bash

Then run the following command to generate passwords for all the built-in users:

[root@c9f915e86309 elasticsearch]# bin/elasticsearch-setup-passwords auto

Note them down and keep them somewhere safe. Exit the container by pressing CTRL+D
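
As a side note, if you would rather choose the passwords yourself instead of having them auto-generated, the same tool also has an interactive mode. You would run it inside the container (just like the auto variant) and it will prompt you for a password for each built-in user:

[root@c9f915e86309 elasticsearch]# bin/elasticsearch-setup-passwords interactive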

Now open the kibana.yml file and change it to this:

server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
elasticsearch.username: kibana
elasticsearch.password: <kibana password>

You need to put the password for the kibana user (generated in the previous step) in the elasticsearch.password setting.

Go to the terminal where docker-compose up was running and press CTRL+C to bring the containers down. And then run the command again

$ docker-compose up

This should bring up both the services, elasticsearch and kibana. Now, if you open your browser and visit http://localhost:9200, it will again prompt you for a username and password. Here, enter the username elastic and the password for the elastic user that you got earlier. On successful authentication, you should see output like this:

{
  "name" : "1mG1JlU",
  "cluster_name" : "my-elasticsearch-cluster",
  "cluster_uuid" : "-mEbLeYVRb-XqA24yq6D1w",
  "version" : {
    "number" : "6.8.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "65b6179",
    "build_date" : "2019-05-15T20:06:13.172855Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
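
The same check can be done from the terminal by passing the credentials with curl's -u flag (replace the placeholder with the password generated for the elastic user):

# Without credentials you should get a 401 security_exception
$ curl http://localhost:9200

# With the elastic superuser credentials the request succeeds
$ curl -u elastic:<password for elastic user> http://localhost:9200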

Also, if you open http://localhost:5601 you will see the Kibana console but now it will ask for username and password:

Screenshot-from-2019-06-12-01-00-10

Here, too, enter the username elastic and the corresponding password. If you have followed all the steps correctly so far, authentication will succeed and you will see the Kibana console.

Now, if you click on the Management tab in the sidebar, you will see the Security section in the right hand side panel.

Screenshot-from-2019-07-27-16-15-10

Here you can view the existing users and roles, and create new roles or users. There is a lot you can do here and I would recommend you play with it for a while to get a feel for it.
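
Everything you can do from this UI is also available through the security API. As a rough sketch (the endpoints below are the 6.x style _xpack/security endpoints; the role name, username and password are made up for illustration, so double-check the request bodies against the official reference), creating a read-only role and a user assigned to it could look something like this:

# Create a role that can only read indices matching logs-*
$ curl -u elastic:<password for elastic user> -H 'Content-Type: application/json' \
  -X PUT http://localhost:9200/_xpack/security/role/logs_read_only -d '
{
  "indices": [
    { "names": [ "logs-*" ], "privileges": [ "read" ] }
  ]
}'

# Create a user and assign that role to them
$ curl -u elastic:<password for elastic user> -H 'Content-Type: application/json' \
  -X PUT http://localhost:9200/_xpack/security/user/jane -d '
{
  "password": "a-strong-password",
  "roles": [ "logs_read_only" ]
}'

The user jane would then be able to search the matching indices but not write to them or manage the cluster.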

Conclusion

This completes our tutorial and our setup of an Elasticsearch and Kibana cluster with the Basic (free) license using docker-compose, with X-Pack security enabled. I hope you found it helpful, and if you have any suggestions or find any errors, feel free to comment below.

Happy Coding :-)

]]>
<![CDATA[How to fix the Strava crashing issue on OnePlus 3T]]>http://codingfundas.com/how-to-fix-the-strava-crashing-issue-on-oneplus-3t/5b7eae78f2c5f22fcce511cfThu, 23 Aug 2018 18:34:24 GMT

This is not a programming related post but something I felt was worth sharing for runners/cyclists like me who use the mobile app Strava to track their activity. For the past few months, I had constantly been facing an issue with this app on my OnePlus 3T. Every time I tried to record an activity, the app would crash after a few minutes and then, after it resumed, it completely messed up the tracking information. This is the error I was getting:

d7ba5aa3-adcc-4f09-8b2c-cb7d51c0db38

On the map, it used to draw a straight line from the point where it crashed to the point where it resumed. See the straight line in the screenshot below:

08ad9730-7918-4436-bb93-52f777229c86

It can be frustrating at times, especially when you are running in loops. Imagine yourself running around a park with a 1 km perimeter. Let's say you ran 3 laps between the time the app crashed and the time it recovered, but the straight-line distance between those two points is, say, 100 meters. Then, instead of showing a distance of 3 km, Strava shows that you only ran 100 m.

2gf2a4

Most of the generic solutions posted online did not work for me. I did eventually find a solution specific to OnePlus phones but had to search through the forums for it. So I felt that sharing a step-by-step guide, with screenshots, on how to solve the problem might be helpful. I am sharing what worked for me and it should most likely work for other OnePlus users, but I can't guarantee that it will.

Solution: short version

I will just mention the steps that need to be performed and then I will elaborate on each step with a screenshot.

  • Go to Settings > Location > Mode. Select the mode High accuracy
  • Then go to Settings > Apps > Application List. Find the Strava app and click on it.
  • Click on Battery. Make sure the Background activity is selected.
  • Then click on the Battery optimization. Scroll down to find Strava app in that list. Click on it and select Don't optimize.
  • Now go out of settings. Start Strava app to track an activity.
  • Click the button on the right side of the home button to bring up recent apps. Find Strava in that list but don't tap on it. You should see a small lock icon on the top right corner of the Strava app. Make sure that icon shows a closed lock; if it shows an open lock, tap on it to close it. This ensures that the OS won't kill the app when you try to close all recent apps

Now start tracking your activities. This should most likely resolve the crashing app issue for OnePlus phones. If you had trouble understanding the instructions above, see below for a more detailed version with screenshots

Solution: detailed version

Step 1: Set the location mode to high accuracy

Go to Settings > Location > Mode and select High accuracy

WhatsApp-Image-2018-08-23-at-11.03.08-PM

Step 2: Go to Strava application settings

Go to Settings > Apps > Application List. Find the Strava app and click on it.

059c8774-e3ad-412e-a3ca-d61081f3b8b1-1

311b92ed-8f36-40e8-b93d-3953b84fa55a

72b6f9f8-1b71-4954-a308-5e11b5899dfd

Step 3: Select "Background activity" in app's battery settings

Click on Battery

20d406d2-17a3-4c53-b006-9d037565a672

Make sure the Background activity is selected.

da952412-2b4d-453b-a6f4-a5bf3a081f32_1

Step 4: Disable battery optimization for Strava

In the same view, click on the Battery optimization.

da952412-2b4d-453b-a6f4-a5bf3a081f32

Scroll down to find Strava app in that list. Click on it and select Don't optimize.

3471e136-24a9-4edc-98a1-70eee4fb36aa

Step 5: Lock the Strava app to avoid closing

On the OnePlus 3T, there are 3 buttons: the Home button (middle), the Back button (left) and the Recents button (right). The Recents button shows the list of all the apps that are running.

Start the Strava app. Then click on the Recents button and scroll to find Strava in the list of apps shown. On the top right corner of the app, you will see a lock symbol. Make sure it looks like a closed lock, as in the screenshot below:

2fe55153-ddd4-4762-ad36-9f281895c1fc

If it shows an open lock, tap on the lock icon to lock the app. This setting prevents the app from closing when you tap the option to close all apps.

Conclusion

These steps solved the problem for me, at least for now. I will keep testing the app to see if it crashes again in the same manner and will update this article accordingly. But for now, this solution works perfectly for me. Hope this helps you. Happy running :-)

]]>
<![CDATA[How to SSH to AWS servers using an SSH config file?]]>http://codingfundas.com/ssh-to-aws-servers-using-an-ssh-config-file/5b1ce862fb334d4343e7ae14Sun, 10 Jun 2018 19:01:31 GMT

How do you usually SSH to an AWS (Amazon Web Services) EC2 instance? If your answer is:

ssh -i <your pem file> <username>@<ip address of server>

Then you should read this tutorial.

What is an SSH config file and why should I even bother to know?

The above mentioned method for connecting to an AWS EC2 instance or any remote server is absolutely correct. There is nothing wrong with it and it is a highly secure way of connecting to a remote server. But imagine yourself having to connect to 15 different servers almost every day (15 different IP addresses to remember), each of them having a different private key file (the pem file in the above example). Let's say on some of the servers you need to connect as user ubuntu and on some of the servers you need to connect as user ec2-user, etc. Also, let us say you want some port forwarding (more on this later) in some of those connections. Remembering all these configs for even a handful of servers can be a pain and it becomes a mess to handle everything with the above mentioned method. Do you see the ugliness of it, the disarray? Would it not be much easier if you could just write the command:

$ ssh dev-server

Or

$ ssh production-server

Imagine executing this command from any directory, without bothering to remember the location of your pem files (private keys), the username with which you want to connect and the IP address of the server. This would make life so much better. That's exactly what an SSH config file is meant for. As its name suggests, it's a file where you provide all sorts of configuration options like the server IP address, location of the private key file, username, port forwarding, etc. And here you provide an easy to remember name for the servers like dev-server or production-server, etc.

Now, do you see the beauty of it? The possibilities, the wonder? Well, if you do and you wish to learn how to explore these possibilities, then read on.

What we will do in this tutorial?

We will quickly go through a brief introduction of SSH and the concept of private and public keys. Then we will see how to SSH to an AWS instance without using a config file. Then we will learn how to connect to the same instance using an SSH config file instead. So, this brings us to our first question

What is SSH?

SSH stands for Secure Shell. The Wikipedia definition says:

Secure Shell (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network

In very simple terms, it is a secure way of logging in to a remote server. It gives you a terminal to the remote server where you can execute the shell commands.

How does it work?

When you wish to connect to a remote server using SSH from your local machine, your local machine is the client and the remote server is the server. The client machine needs a process called an ssh client whose task is to initiate ssh connection requests and participate in establishing the connection with the server. The remote server needs to run a process called an ssh server whose task is to listen for ssh connection requests, authenticate them and provide access to the remote server's shell upon successful authentication. When we wish to connect to a remote server, we provide the ssh client with the server's IP address, the username with which we wish to log in, and a password or private key.

What are public and private keys?

Typically, when we connect to a remote server via SSH, we do it using a public-private key based authentication. Public and private keys are basically base64 encoded strings which are stored in files. They are generated in pairs. Think of them as two different keys which are needed together to open a lock. And think of the process of establishing an SSH connection as a process of opening a lock. This process requires two keys of the same pair, a private key and its corresponding public key. We keep our private key file on our local machine and the server needs to store our public key.
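
If you have never generated such a pair yourself, the ssh-keygen tool does exactly this (a quick sketch; the file name and comment below are just examples):

# Creates ~/.ssh/my_key (private key) and ~/.ssh/my_key.pub (public key)
$ ssh-keygen -t rsa -b 4096 -C "your_email@example.com" -f ~/.ssh/my_key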

Let us say we wish to login to a hypothetical remote server with IP address 54.0.0.121 with username john.

We give our ssh_client the address of the server (54.0.0.121), the username with which we wish to login (john) and the private key file to use. The ssh client goes to the ssh server using the address that we gave him and asks ssh server to bring the public key for user john to open the lock (authenticate the user john and provide him access to the remote server via SSH).

The ssh server checks the list of public keys he has and brings the public key for john. Both ssh client and ssh server then insert their respective public and private keys into the common lock. If the keys belong to the same pair, the lock will be opened and connection established. If the ssh server does not have the public key for user john then the lock does not open and authentication fails.

The above analogy is an oversimplification. The actual process is somewhat more complex. If you wish to understand the details of how it actually works, I would recommend this article on DigitalOcean.

How does one usually SSH to an AWS EC2 instance

When we create a new AWS EC2 instance for example using an Amazon Linux AMI or an Ubuntu Server AMI, at the last step we are asked a question about creating a new key pair or choosing an existing key pair. If you do it for the first time, you will need to create a new key pair. You provide a name for the key pair on this step. Let's say you provide the name as MyKeyPair at this step.
Before being able to proceed, you need to click the Download Key Pair button. This generates a public/private key pair and lets you download the private key as a MyKeyPair.pem file. After this, when you click the Launch Instance button, AWS automatically adds the public key of the key pair to the newly created EC2 instance. Public keys are located in the ~/.ssh/authorized_keys file. So, if you chose the Amazon Linux AMI while creating the EC2 instance, it will be added to the /home/ec2-user/.ssh/authorized_keys file. Similarly, if you used the Ubuntu Linux AMI while creating the EC2 instance, the public key will be added to the /home/ubuntu/.ssh/authorized_keys file. The first thing you need to do is change the permissions of your private key file (MyKeyPair.pem). Navigate to the directory where your private key file is located and then run the following command:

$ chmod 400 MyKeyPair.pem

This makes your private key file readable only by you; nobody else can read or write it. This matters because ssh refuses to use a private key file whose permissions are too open. Now, in order to SSH to an EC2 instance, we would execute the following command:

$ ssh -i <path to MyKeyPair.pem> <username>@<ip address of the server>

So, for example if the IP address of the server was say 54.0.0.121 and we chose Ubuntu Linux while creating the EC2 instance, then the username will be ubuntu. And our command becomes

$ ssh -i MyKeyPair.pem ubuntu@54.0.0.121

This is assuming we are running this command from the directory containing our MyKeyPair.pem file. If we are executing this command from some other directory then we will need to provide the correct path of the MyKeyPair.pem file. Similarly, if we used Amazon Linux AMI while creating the EC2 instance, then username in that case becomes ec2-user.

So, this explains how AWS generates public-private key pairs when you create an EC2 instance and how you can use the private key to connect to an EC2 instance. Next we will learn how to do the same using an SSH config file.

How to use an SSH config file

We have already discussed what an SSH config file is. Now we will create one and use it to connect to the EC2 instance we connected to earlier. The SSH config file needs to be in the ~/.ssh directory of the client machine. In our case, this will be our local machine. So, go to the ~/.ssh directory (create it if it does not exist) and then create a file named config. Open the file and add the following contents to it:

Host <an easy to remember name for the server>
  HostName <IP address of the server>
  IdentityFile <full path of the private Key file>
  User <username>

Replace the values in <> with actual values in your case. For example, if we used Ubuntu Linux AMI and the IP address of the server is 54.0.0.121 and the private key file (MyKeyPair.pem file) is located in /home/mandeep/private_keys directory, then the content of the config file becomes:

Host my-server
  HostName 54.0.0.121
  IdentityFile /home/mandeep/private_keys/MyKeyPair.pem
  User ubuntu

Let us see what each of these lines mean:

  • Host: Here you need to provide any easy to remember name for the server. This is only for your reference
  • HostName: This is the fully qualified domain name or IP address of the server. In our example we have used an IP address but it can also be a fully qualified domain name like api.example.com.
  • IdentityFile: Absolute path of the private key file
  • User: username of the user logging in. This user must exist on the server and have the public key in the ~/.ssh/authorized_keys file.

Once you save this file, you can easily connect to your EC2 instance by running the following command in the terminal:

$ ssh my-server

Here, it does not matter from which directory you execute this command. You can add as many configurations as you want in your config file. For example, if you wish to connect to another server with IP address say 54.1.1.91 and private key as MySecondKey.pem and username as ec2-user then your config file should look like this:

Host my-server
  HostName 54.0.0.121
  IdentityFile /home/mandeep/private_keys/MyKeyPair.pem
  User ubuntu
Host my-second-server
  HostName 54.1.1.91
  IdentityFile /home/mandeep/private_keys/MySecondKey.pem
  User ec2-user

Now you can connect to the my-second-server by running the command:

$ ssh my-second-server

That's it. That's how you create an SSH config file. Easy, isn't it? And once you start using it, it's hard to imagine living without it. It makes life so much better.

So, we know how to SSH to a remote server using config files. What next?

Well, there are plenty of configuration options one can provide in a config file and discussing all of them is beyond the scope of this tutorial. You can refer to the documentation here for the complete list of options, but I will be discussing the two options that I usually find quite handy:

  • LocalForward
  • ForwardAgent

LocalForward or Local Port Forwarding

Let us discuss this with an example. Consider a scenario where you have a remote server with the domain name redis.mydomain.com, and some process is running on this server which is not accessible publicly. For example, let us say we are running a redis server on this remote server on port 6379, but it can only be accessed after logging in to the remote server and not from outside. Now let's say our requirement is that we need to access this remote redis server from a script running on our local machine. How do we do this?

SSH tunneling allows us to map a port on our local machine to an address:port as seen from the remote server. For example, we can map port 6389 on our local machine to the address localhost:6379 on the remote server. After doing this, our local machine thinks that the redis server (which is actually running on the remote server on localhost:6379) is running on our local machine on port 6389. So, when you hit localhost:6389 on your local machine, you are actually hitting the redis server running on the remote server on port 6379.
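
For reference, the same tunnel can be opened with a one-off ssh command using the -L flag (assuming you log in as ubuntu with the private key shown in the config example below):

$ ssh -i /home/mandeep/private_keys/RedisServerKey.pem -L 6389:localhost:6379 ubuntu@redis.mydomain.com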

How do we do this using our SSH config file?

We just need to add an additional property of LocalForward. Here is an example:

Host Redis-Server
  Hostname redis.mydomain.com
  IdentityFile /home/mandeep/private_keys/RedisServerKey.pem
  Localforward 6389 localhost:6379
  User ubuntu

This approach comes in quite handy when you want to access a server which is part of a VPC (Virtual Private Cloud) and not accessible publicly. For example, an Elasticache instance, an RDS instance, etc.
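
Once the config entry is in place, using the tunnel is just a matter of opening the SSH connection and then talking to the local port. A small sketch (assuming redis-cli is installed on your local machine):

# Terminal 1: open the SSH connection, which also sets up the port forward
$ ssh Redis-Server

# Terminal 2: the remote redis server is now reachable on local port 6389
$ redis-cli -h 127.0.0.1 -p 6389 ping
PONG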

ForwardAgent

This property allows your SSH session to use the SSH keys loaded in the ssh-agent on your local machine. Consider a scenario where you have a private Git repository on Github. You can access the repository either via HTTPS using a username and password, or via SSH using a private key. The username and password approach is less secure and not recommended. For accessing your repo via SSH, what we typically do is create a private/public key pair which is stored in the ~/.ssh directory as id_rsa (private key) and id_rsa.pub (public key) files. Once we add our public key (id_rsa.pub) to our Github account, we can access our repository via SSH. This works well on our local machine. Now consider a scenario where you need to SSH to a remote server and access the Git repository from that remote server. You have two options here. One is to copy your private key (id_rsa) file and put it in the ~/.ssh directory on the remote server. This is a bad approach since you are not supposed to share your private key file. Another approach would be to generate a new key pair on the server and add the public key of that pair to the Github repo. There is a problem with both approaches: anyone who can SSH to the remote server will be able to access the Git repository. Let's say we don't want that. Let's say we only want the developers who have access to the repo through their own private key files to be able to access the repo. Anybody else who can SSH to the remote server but does not have access to the repo should not be able to access it from there. This is where the ForwardAgent property comes in quite handy. You can add it to your config file as shown below:

Host App-Server
  Hostname app.mydomain.com
  IdentityFile /home/mandeep/private_keys/AppServerKey.pem
  User ubuntu
  ForwardAgent yes   

After adding this property to your config file, when you SSH to the server using the following command:

$ ssh App-Server

Then the SSH session that gets opened forwards authentication requests back to the ssh-agent running on your local machine, which holds your credentials (the id_rsa key). So, even though there is no ~/.ssh/id_rsa file on the remote server and your private key never leaves your local machine, any Git repository that you can access from your local machine can also be accessed from the remote server during that session.
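
A quick way to sanity-check that agent forwarding is working (assuming your key is loaded in your local ssh-agent and your repository is on Github):

# On your local machine: confirm the key is loaded in the agent
$ ssh-add -l

# SSH to the server, then test Github authentication from there
$ ssh App-Server
$ ssh -T git@github.com

If forwarding works, Github greets you with your username even though the remote server has no copy of your private key.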

Conclusion

With this tutorial we learned the importance of an SSH config file and saw how it can make our lives easier. If you found this tutorial helpful and believe that it can help others, please share it on social media using the social media sharing buttons below. If you like my tutorials and my writing style, follow me on twitter. If you feel I have made any mistakes or any information in this article is incorrect, feel free to mention those in the comments below. Thanks! Happy coding :-)

]]>
<![CDATA[Hosting a Ghost blog on AWS S3 as a static website]]>http://codingfundas.com/ghost-blog-on-aws-s3-as-a-static-website/5b1255e05b8520f196f6bcecSun, 03 Jun 2018 19:33:30 GMT

Before diving into what this article is all about, I would like to provide a summary of what static website hosting on S3 means and what its benefits are.

What is static website hosting on S3?

Amazon Web Services (AWS) provides a very economical file storage service called Simple Storage Service (S3 in short). S3 is extremely cheap compared to disks, is highly available and is often used for file storage. Apart from simple file storage, AWS also allows you to host your website on S3. So, if you have a static website, hosting it on S3 is a very good option due to the following reasons:

  • Cheap: Serving your website from S3 is much, much cheaper as compared to serving it from a server. You can handle millions of users with just around $2 a month.
  • Scalable: Since there are no servers involved, you don't need to bother about scaling as your traffic increases.
  • Reliable: No servers mean no worries about downtime. S3 buckets will always be available.

If your website does not include any server side code, there is no reason to host it on a traditional server. So, when I decided to start my own blog using Ghost, I explored the possibility of hosting it on S3 as a static website. And in this article, I will be sharing my experience doing the same and showing you how you can do that with your own blog.

Objective of this article

The purpose of this article is to demonstrate how to set up your own blog using Ghost and then show how to host your blog on AWS S3 as a static website. We'll also cover how you can link your domain from Godaddy with your S3 bucket using Route 53. Here are the steps we will follow:

  • Installing Node.js
  • Setting up Ghost on your local machine
  • Creating content for your blog
  • Generating assets for static website from your locally hosted ghost blog using HTTrack
  • Setting up AWS CLI
  • Creating and configuring S3 buckets for your domain
  • Deploying your static website assets (generated by HTTrack) to your S3 bucket
  • Pointing your Godaddy domain to your S3 bucket using Route 53

Pre Requisites

  • You have an AWS account
  • You already have a domain in Godaddy
  • You are using a Linux or Unix based system

If you are using some other operating system, you can follow the instructions specific to that OS but the process remains more or less the same. Similarly, if your domain is registered with a domain registrar other than Godaddy, you will need to follow that registrar's instructions on how to map your domain to your S3 buckets. You can still follow this article even if you do not have a domain yet. However, your blog links will look something like this:

http://yourdomain.com.s3-website.ap-south-1.amazonaws.com

Instead of this:

http://yourdomain.com

But you can still use S3 to host your blog even without buying a domain of your own. Also, I am using Ghost for my blog because I liked it most due to its simplicity and amazing markdown editor, but you can use the concepts of this tutorial and apply them to your blog even if you are using some other blogging engine.

So, without further ado, let us dive in

Step 1 — Installing Node.js

Head over to the official Node.js website and download and install the Node.js version for your OS. I would recommend going with the LTS version. If you are using Linux, you can also follow the instructions for your distribution of Linux and install the same using package manager here

Once installed, verify that by opening your terminal and typing the following command:

$ node -v

If it displays the version of Node.js, then Node was successfully installed on your machine. If it shows an error message then you probably made some mistake during installation. So, you will need to follow the installation instructions again to make sure Node is installed on your machine.

Step 2 — Setting up Ghost on your local machine

We will download Ghost for developers on our local machine and use it to host our Ghost blog first on our localhost.

By default, the Ghost installation will contain some demo posts. We will then explore the admin panel of Ghost and remove the demo posts and add some new blog posts of our own.

Then we will see how our blog looks on our localhost.

First, let us create a directory on our local machine where we will keep all the files for our blog. Open your terminal and go to your home directory. Then create a new directory called my-awesome-blog. You need to run the following commands for that:

$ cd
$ mkdir my-awesome-blog
$ cd my-awesome-blog

Now we need to download Ghost for developers and copy it to our my-awesome-blog directory. Head over to Ghost developer's page here and download the zip file to your local machine. Copy the zip file to your my-awesome-blog directory and unzip it. Now the contents of your my-awesome-blog directory should look like this:

➜  my-awesome-blog ls -lhrt
total 552
-rw-r--r--  1 mandeep  staff   209K May 29 15:13 yarn.lock
-rw-r--r--  1 mandeep  staff   1.4K May 29 15:13 index.js
-rw-r--r--  1 mandeep  staff   3.9K May 29 15:13 README.md
-rw-r--r--  1 mandeep  staff   3.1K May 29 15:13 PRIVACY.md
-rw-r--r--  1 mandeep  staff   451B May 29 15:13 MigratorConfig.js
-rw-r--r--  1 mandeep  staff   1.0K May 29 15:13 LICENSE
-rw-r--r--  1 mandeep  staff    32K May 29 15:13 Gruntfile.js
-rw-r--r--  1 mandeep  staff   4.1K May 29 15:14 package.json
drwxr-xr-x  5 mandeep  staff   170B May 29 15:16 core
drwxr-xr-x  9 mandeep  staff   306B May 29 15:16 content

This is the Ghost bundle for developers that you can use to host a Ghost blog on your local machine and see what your blog will look like. In order to quickly see how the blog looks by default, run the following command from inside the my-awesome-blog directory:

$ npm install --production

Ghost is built in Node.js. Typically, when you download any Node.js project, you need to install its dependencies. The above npm install command does that. It will install the project dependencies specific to your operating system version. Next we need to initialize the database for our blog. Run the following command in terminal (note that it is npx and not npm):

$ npx knex-migrator init

This will create the ghost-dev.db file in the my-awesome-blog/content/data directory. This file contains the database for our blog and initially it contains some demo blog posts by ghost.org

Note: npx is a very cool utility. We don't really need to know anything about it for this tutorial but you can definitely read more about it here https://github.com/zkat/npx

Once we have installed all the dependencies and initialized the database, let's start the Ghost server by running the following command:

$ npm start

You should see output like this in your terminal:

➜  my-awesome-blog npm start

> ghost@1.23.1 start /Users/mandeep/my-awesome-blog
> node index

[2018-06-02 10:36:16] WARN Theme's file locales/en.json not found.
[2018-06-02 10:36:16] INFO Ghost is running in development...
[2018-06-02 10:36:16] INFO Listening on: 127.0.0.1:2368
[2018-06-02 10:36:16] INFO Url configured as: http://localhost:2368/
[2018-06-02 10:36:16] INFO Ctrl+C to shut down
[2018-06-02 10:36:16] INFO Ghost boot 1.517s

Please note the third last line in the above output which says that the Url is configured as http://localhost:2368/

This means that your Ghost server is running on your local machine and serving the blog at the above mentioned address. Leave the server running in your terminal. Open your browser and copy paste that url in the address bar, you should see your blog. It will look something like this:

Screenshot-from-2018-06-03-13-17-00

By default, your blog will contain demo posts by Ghost, each post being a tutorial on how to use various functionalities in Ghost. I would recommend that you go through these tutorials and get familiar with Ghost. In our next step, we will be deleting all of these posts and writing our own new posts using Ghost's admin panel.

Step 3 — Creating content for your blog

As we saw in step 2, our ghost blog contains demo posts by default. Now we will see how to manage posts on our blog by using the Ghost admin panel.

Open your browser and go to the url http://localhost:2368/ghost/

This will open the Ghost admin panel. Since this is the first time we are accessing the admin panel, it will open the wizard for creating a new admin user. Here's how it looks:

Screenshot-from-2018-06-03-13-20-27

Go ahead and follow the wizard to create a new account and set the password for the same. For now, we do not need to invite anybody so you can skip the last step where it asks you to invite your team. Once you have created the admin account, you will see the admin panel which will look something like this:

Screenshot-from-2018-06-03-13-24-31

You can see all the posts here. In Ghost's terminology, a blog post is called a story. So, in the left side navigation pane, you can see the stories tab and if you click on that you will see all the stories on the right hand side panel. If you click on the title of any of the stories, it will open that story for editing. Feel free to play around with the admin panel. Go edit some stories, save your changes, publish your changes (Update/Publish button at top right corner), play with settings of your story (gear icon on top right corner). Get familiar with the admin panel. Once you are confident with the admin panel, let's go ahead and delete all the existing blog posts and create some new posts.

In order to delete the old blog posts, you can open each of the stories, click on the gear icon on the top right corner, scroll to the bottom and click on the Delete Post button. Do this for all the posts. Here is a screenshot for reference (look at the bottom right corner):

Screenshot-from-2018-06-03-13-26-33

Once you have deleted all the posts, click on the New story button in the left side navigation panel. This will start a new blog post with a markdown editor on the right hand side panel. If you are familiar with Markdown syntax, go ahead and play with the editor and get familiar with it. There is a lot of cool stuff you can do in it. Then tweak the settings of the post by clicking the gear icon on the top right corner. Once you have written your blog post, click on the Publish button at the top right corner to publish it. Once published, you can view your blog by clicking on the View site button at the bottom of the left hand side panel. You should be able to see your blog post there.

Great! So now we have our content ready for our awesome blog. This blog is currently being served by the Node.js server that we ran using npm start command. As of now, we cannot host our blog on AWS S3 because it is not a static website. Its content is being served dynamically by the Node.js server. In order to host our blog on S3, we will need to generate the static website corresponding to our blog. This means, we need all the files as static HTML, CSS and JS files that we can upload to S3. That's what we are going to do in our next step using HTTrack

Step 4 — Generating assets for static website from your locally hosted ghost blog using HTTrack

We will be using the concept of website mirroring to generate our static website from our blog. Mirroring a website means crawling all of its pages to create a replica of it on our local machine. Doing this for a large, complex website can get complicated, but since our blog is a simple site hosted locally, mirroring it is straightforward. There are many tools for mirroring a website, for example wget, HTTrack, etc. For this tutorial, we will be using HTTrack.

HTTrack is a command line utility for accessing websites and downloading content from the web. It is typically used for mirroring websites to create a copy of them on your local machine. In order to install HTTrack, visit their download page and follow the instructions for your OS. For Ubuntu, you can simply install it by running the following command in terminal:

$ sudo apt-get install webhttrack

Verify the installation by running the following command in your terminal:

$ httrack --version

If installed, it will display the version of the HTTrack installed on your machine. Once you have HTTrack installed, let us use it to mirror our blog which is being served dynamically.

Make sure that your Ghost Node.js server is running in terminal (the npm start command that we executed earlier). Keep it running and open another terminal window or tab. Go to my-awesome-blog directory and from that directory, run the command:

$ httrack

This will start a wizard for mirroring your blog. It will ask you for project name. Enter the name as static-website.

Then it will ask for the base path. Enter dot (.) there. This tells HTTrack to put the static assets inside the my-awesome-blog/static-website directory.

Then it will ask for URLs. Enter the url of the blog on your localhost, that is http://localhost:2368/

Then it will ask for options:

Action:
(enter)	1	Mirror Web Site(s)
	2	Mirror Web Site(s) with Wizard
	3	Just Get Files Indicated
	4	Mirror ALL links in URLs (Multiple Mirror)
	5	Test Links In URLs (Bookmark Test)
	0	Quit

Enter 1 to mirror the website. For the next options, just skip them by pressing ENTER.

Finally it will ask if you are ready to launch the mirror. Type Y and press ENTER.

It will then start the mirror and let you know once completed. Here is the screenshot

Screenshot-from-2018-06-03-17-12-07

This will crawl our entire blog. It will create a directory static-website inside the my-awesome-blog directory. Inside the static-website directory, there will be another directory localhost_2368. Inside the localhost_2368 directory you will find all the assets and index.html for your static website. The directory structure should look similar to this:

 - my-awesome-blog
   | - static-website
     | - localhost_2368
       | - favicon.ico
       | - public
       | - author
       | - assets
       | - hello
       | - index.html

Instead of hello you might see the title of your blog post that you wrote earlier.

Congratulations! You have successfully created the static assets from your Ghost blog. Now you just need to upload these assets to AWS S3 so that the whole world can see your awesome blog.
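Tip: when you update your blog later and want to regenerate the static assets, you don't have to go through the interactive wizard again. HTTrack can also be run non-interactively; a rough equivalent of what we did above (assuming the defaults suit you, with -O pointing at the output directory) would be:

$ httrack "http://localhost:2368/" -O ./static-website

Run it from inside the my-awesome-blog directory while the Ghost server is still running.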


Note: From this step onwards, you will need an AWS account.

Step 5 — Setting up AWS CLI

AWS CLI is the command line interface which allows you to perform various actions on AWS from your terminal. Once we have created the assets for our static website, we need to upload them to an S3 bucket. While this can be done using the AWS Console in your web browser, copying all the files while maintaining the directory structure would be quite a tedious task. Also, every time we change anything in our blog, we will need to run the httrack command to generate the static website assets again and then sync the updated directory with the S3 bucket (this involves removing deleted files, updating existing files and adding newly added files). Doing this via the web browser is not practical, but with the AWS CLI it can be done with a single command. So, it is quite important for us to configure the AWS CLI on our local machine.

For setting up the AWS CLI, we need an IAM user in AWS and the Access Key Id and Secret Access Key for that user. IAM stands for Identity and Access Management and it is an AWS service which allows us to create users, groups and roles, and to control which groups have access to perform which actions, which users belong to which groups, etc. In short, all sorts of access controls that we want to impose on our AWS resources can be managed via IAM. Discussing everything that can be done with IAM is beyond the scope of this tutorial, and we don't really need to know all of it for our use case. Here is what we are going to do:

  • Create a group in IAM which has full access to S3. Call this group s3-admins
  • Create an IAM user ghost-blogger and assign this user to the group s3-admins
  • Copy the generated credentials of our user ghost-blogger and use them to setup AWS CLI on our local machine

First of all, login to your AWS Console from your web browser. Then navigate to the IAM > Groups

Screenshot-from-2018-06-03-15-01-49

Click on the Create New Group button. Enter the group name s3-admins and click next. In the next page you will see the option to attach a policy. Policies are contracts which govern all the access controls. In the Policy Type box, type: S3. It will automatically filter all the policies associated with S3. We need to select the policy AmazonS3FullAccess. Any entity (user or group) associated with this policy will have full access to AWS S3 for this AWS account. Look at the screenshot below:

Screenshot-from-2018-06-03-15-05-28

Select the AmazonS3FullAccess policy and click Next Step.

Screenshot-from-2018-06-03-15-06-53

Review the details and click on Create Group button. You should be able to see the newly created group in the group list.

Now, we need to create a user ghost-blogger and link it with the s3-admins group. Go to IAM > Users for this. Click the Add User button. This will open up the form for creating a new user. Enter the username as ghost-blogger. At the bottom of the form, you will see two checkboxes for selecting which type of access we want to grant this user. For our case, we only need to select Programmatic access because we will be using this user only for uploading our static assets to the S3 bucket from the AWS CLI. So, go ahead and select the Programmatic access checkbox and click Next.

Screenshot-from-2018-06-03-15-12-13

In the next step, we add this user to the s3-admins group that we had created earlier. Review the information and proceed to create the user.

Very Very Important: Once you create the user, you will be able to see the Access Key ID and Secret Access Key of the user. AWS will also provide an option to download the credentials as a csv file. Do that. Copy the credentials and keep them somewhere safe, because AWS will never show the Secret Access Key again.

Did you copy the credentials? Yes? Good. Now let's get back to our local machine. Now we need to configure AWS CLI on our local machine. First, we will need to install AWS CLI on our machine. You can check out the installation instructions here and follow the instructions specific to your operating system. Verify the installation by running the aws --version command from terminal.

Once installed, now we need to configure it to be able to upload our static website assets to s3.

Open the terminal and type the command

$ aws configure

It will prompt you to provide your Access Key ID and Secret Access Key one by one. Paste the credentials that you copied earlier. For the rest of the questions, you can just press enter. To verify that it is configured correctly, type the following command in terminal:

$ aws s3 ls

This command lists all the buckets in S3 in your AWS account. If you do not have any buckets in your S3, the command will run without any output. If the above command throws no error then it means our awscli is configured correctly. Now we can use it to upload our static website assets to our S3 buckets. But first, we need to create the buckets in our S3 and that is our next step.

Step 6 — Creating and configuring S3 buckets for your domain

Let us say we own a domain called yourdomain.com. We want to serve our website at http://yourdomain.com. And also if any user accesses http://www.yourdomain.com, we want to redirect them to http://yourdomain.com. In order to setup our domains this way, we will need to create two buckets on S3:

  • yourdomain.com
  • www.yourdomain.com

Please note that you will need to create the buckets as per the domain name you own. Here yourdomain is just a placeholder. For example, in my case I had to create the buckets codingfundas.com and www.codingfundas.com.
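By the way, if you prefer the terminal over the AWS Console, the buckets can also be created with the AWS CLI that we configured in Step 5. This is just an optional shortcut (the bucket names below are placeholders for your actual domain, and the buckets will be created in your configured default region):

$ aws s3 mb s3://yourdomain.com
$ aws s3 mb s3://www.yourdomain.com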

Once you have created the buckets, click on the yourdomain.com bucket. Then click on Properties tab. Click on Static website hosting and then select the box Use this bucket to host a website. In the index document field, fill index.html and click Save.

Screenshot-from-2018-06-03-15-55-04

Note down the url mentioned in the Endpoint (the one masked in the above screenshot). This is the public url of your website and it will look something like this:

http://yourdomain.com.s3-website.ap-south-1.amazonaws.com

Instead of ap-south-1, you might have something else in your case depending on the region you chose for your S3 bucket. Note it down and save it somewhere. We will need it later to access our website and map our domain to S3 bucket.
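As a side note, the same static website hosting setting can also be enabled from the terminal with the AWS CLI, in case you ever want to script it (the bucket name is a placeholder):

$ aws s3 website s3://yourdomain.com/ --index-document index.html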

Then click on Permissions tab and then click on Bucket Policy. Paste the following policy in the box below (replace yourdomain with the actual domain name):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadForGetBucketObjects",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::yourdomain.com/*"
        }
    ]
}

Click Save. If you entered the policy correctly, it will save. Otherwise it will throw an error. As mentioned earlier, policies in AWS are contracts which dictate the access control for AWS resources. The above policy dictates that our bucket yourdomain.com and all the objects in that bucket can be accessed by anyone on the internet. It only provides read-only access to the public, so nobody else (apart from our ghost-blogger user) can write to this bucket. Upon saving the policy, you might get a warning telling you that this bucket has public access. That's ok since we are using this bucket for hosting a website, so we need it to have public access. AWS gives that warning because if we were using this bucket for some other purpose instead of hosting a website, then it would be a bad idea to grant public access to it. For example, if we were using the S3 bucket to store the personal details of the users of our application, then it would be a disaster if that bucket had public access. Hence the warning. But we don't need to worry about it since we are using this bucket for hosting a website.
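If you prefer the terminal, the same policy can also be applied with the AWS CLI. The sketch below assumes you have saved the policy JSON above into a local file called policy.json (the file name is just an example):

$ aws s3api put-bucket-policy \
  --bucket yourdomain.com \
  --policy file://policy.json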

Once we have configured our bucket yourdomain.com, we then need to configure the other bucket www.yourdomain.com to redirect the traffic to yourdomain.com. So, from S3 buckets listing, click on the bucket www.yourdomain.com and then click on Properties tab and then Static website hosting. Check the option Redirect requests and then in the Target bucket or domain field, enter the name of the other bucket, that is yourdomain.com and in the Protocol field, enter http.

Screenshot-from-2018-06-03-16-11-17

Click Save. Note down the url mentioned in the Endpoint (the one masked in the above screenshot). This is the public url of your website and it will look something like this:

http://www.yourdomain.com.s3-website.ap-south-1.amazonaws.com

Instead of ap-south-1, you might have something else in your case depending on the region you chose for your S3 bucket. Note it down and save it somewhere. We will need it later to access our website and map our domain to S3 bucket.
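The redirect configuration also has a rough CLI equivalent, in case you ever want to script it. Treat the following as a sketch and double check it against the AWS docs before relying on it:

$ aws s3api put-bucket-website \
  --bucket www.yourdomain.com \
  --website-configuration '{"RedirectAllRequestsTo":{"HostName":"yourdomain.com","Protocol":"http"}}'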

This completes our setup of S3 buckets.

Now we are ready to add the assets of our static website to our S3 bucket. Since we have configured yourdomain.com as our main bucket and www.yourdomain.com as the redirect bucket, we need to upload our assets in the yourdomain.com bucket. And that's what we are going to do in the next step.

Step 7 — Deploying your static website assets (generated by HTTrack) to your S3 bucket

Remember when we created the assets for our static website using HTTrack? :-)

Good. We need to use those now. Now we need to deploy those static assets to our s3 bucket using the awscli. Open your terminal and go to directory my-awesome-blog. If you followed all the instructions properly, you should have a directory named static-website inside that my-awesome-blog directory and a directory named localhost_2368 inside the static-website directory. In short, the path should be like this

$HOME/my-awesome-blog/static-website/localhost_2368

Make sure that directory exists. Then, from your my-awesome-blog/static-website directory, execute the following command:

$ aws s3 sync localhost_2368 s3://yourdomain.com \
  --acl public-read \
  --delete

Let us see what this command does.

  • aws s3 sync: is a command that synchronizes the source with the target. Here, source is the sub-directory localhost_2368 which contains the assets for our static website. And the target is our S3 bucket yourdomain.com.
  • --acl public-read: This grants public read access to all the files being uploaded to the bucket.
  • --delete: If any files are present in the S3 bucket which are no longer present in the source directory, then delete them from the S3 bucket.

The command will print the logs to the terminal, showing that it is uploading the files from your local machine to your S3 bucket. Once it completes, go to your AWS console on your browser and check the S3 buckets to see if the newly added assets appear there or not. You might need to refresh the page. Please note that assets will only appear in yourdomain.com bucket and not on the www.yourdomain.com bucket since they were uploaded only on yourdomain.com bucket.
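Since you will be repeating Step 4 (regenerating the static assets) and this sync command every time you update your blog, it can be handy to wrap the two into a tiny script. Here's a minimal sketch, assuming your blog lives in ~/my-awesome-blog, the Ghost server is running on localhost:2368, and yourdomain.com is your bucket (the file name deploy.sh is just an example):

#!/usr/bin/env bash
# deploy.sh - regenerate the static site and push it to S3 (a sketch; adjust paths to your setup)
set -e

cd ~/my-awesome-blog

# Re-mirror the locally running Ghost blog into ./static-website
httrack "http://localhost:2368/" -O ./static-website

# Sync the mirrored assets to the S3 bucket, making them publicly readable
# and deleting files that no longer exist locally
aws s3 sync static-website/localhost_2368 s3://yourdomain.com \
  --acl public-read \
  --delete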

Remember earlier I asked you to copy the Endpoint URLs for your S3 buckets which looked something like these:

http://yourdomain.com.s3-website.ap-south-1.amazonaws.com
http://www.yourdomain.com.s3-website.ap-south-1.amazonaws.com

If you visit any of these urls, you should be able to see your blog there. Go and celebrate! Your awesome blog is now published on the internet. Call your friends and tell them that they can visit your blog at:

http://yourdomain.com.s3-website.ap-south-1.amazonaws.com

Well, chances are that after calling a few friends you'll realize that the URL of your blog is quite long and hard to remember, even for yourself. Wouldn't it be nice if you could serve your blog directly from:

http://yourdomain.com

That's quite easy to remember. Indeed! That will be awesome! And that's exactly what we're going to do in our next and final step.

Step 8 — Pointing your Godaddy domain to your S3 bucket using Route 53

AWS Route 53 is a Domain Name System (DNS) service. While a discussion of DNS and name resolution is beyond the scope of this tutorial, you need to know that we will take the following steps to point our domain from Godaddy to our S3 buckets:

  • Create a hosted zone in Route 53
  • Connect this hosted zone to our S3 buckets by adding record sets
  • Copy the Name servers from our hosted zone and update the same in our Godaddy domain's settings

Here I am writing with the assumption that you purchased the domain from Godaddy. However, if you purchased it from some other domain registrar, the process should be largely the same.

So, first we go to AWS Route 53. Click the button Create Hosted Zone. Enter your domain name in the text box without www prefix. In our example, it will be yourdomain.com.

Screenshot-from-2018-06-03-17-55-49

By default it will create two record sets, one with type NS and the other with type SOA. It will look like this:

Screenshot-from-2018-06-03-18-00-21

Note that the NS record will have 4 values for name servers. You need to copy all 4 of these and save them somewhere. We will need them while editing the domain settings in Godaddy.

Ok we have created a hosted zone. Now we need to link our s3 buckets with this hosted zone. For that, click on Create Record Set button.

Leave the Name field blank. It will default to yourdomain.com.
Then leave the Type as A - IPv4 address.
For Alias, select Yes.
Then click in the Alias Target textbox. You should automatically see the name of your S3 bucket yourdomain.com. Select it.
Leave the Routing Policy as Simple and
Evaluate Target Health as No

The screenshot below contains the name yourawesomedomain instead of yourdomain as the S3 bucket by that name was not available. So for demo purposes, I created another hosted zone by the name of yourawesomedomain.com

Screenshot-from-2018-06-03-18-08-57

Finally click on Create button at the bottom. This will link our bucket yourdomain.com to our hosted zone. We need to also link our second bucket www.yourdomain.com. For that, again click on Create Record Set button.

This time, in the Name field, enter www
Then leave the Type as A - IPv4 address.
For Alias, select Yes.
Then click in the Alias Target textbox. This time, scroll to the bottom till you see the section Record sets in this hosted zone and under that you should see the entry for yourdomain.com. You need to select that.

Screenshot-from-2018-06-03-18-17-39

Leave the Routing Policy as Simple and
Evaluate Target Health as No

Finally click on Create button at the bottom.

Now our hosted zone should have 4 record sets: two Type A records, one Type SOA record and one Type NS record. This completes our config at AWS Route 53. Now we need to go to Godaddy and change the settings there to point our domain to the S3 buckets. The Type NS record in Route 53 has 4 name servers in its value field. Earlier I asked you to copy them; we will need those now.

Log in to your Godaddy account and navigate to the products page. You should see all your domains there. Look for the domain that you need to point to the S3 bucket and click on the DNS button beside it.

Screenshot-from-2018-06-03-18-27-58

That will open a page with settings for your domain. Scroll down to the Nameservers section and click the Change button. Select Custom from the dropdown. Then, one by one, enter the 4 name server URLs copied earlier from Route 53.

Screenshot-from-2018-06-03-18-32-06

Click Save

That's it! This completes our config at Godaddy as well. The DNS change might take anywhere from a few minutes to a few hours to propagate. Afterwards you should be able to visit your blog at http://yourdomain.com and http://www.yourdomain.com.
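If you want to check whether the name server change has propagated, you can query DNS from your terminal. For example, with dig (which ships with most Linux and macOS systems; the domain below is a placeholder):

$ dig yourdomain.com NS +short
$ dig yourdomain.com +short

The first command should eventually list the Route 53 name servers you entered in Godaddy.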

Conclusion

Congratulations! You have successfully understood how to use Ghost as a blogging engine for creating your own blog and also how to significantly reduce the cost of hosting your blog by hosting it on AWS S3 as a static website. Whenever you make any changes to your blog, you need to repeat Step 4 and Step 7 to make them live on your domain. It hardly takes a minute. This is how I hosted my blog and at the time of writing this tutorial, my blog is one week old. I am still exploring Ghost to see what cool stuff I can do with it. I hope this tutorial was not just a step-by-step walkthrough but also helped you understand all the concepts involved.

If you liked this tutorial and it helped you, please share it on social media and help others by using the social media sharing buttons at the bottom. If you have any feedback, please feel free to mention those in the comments section below.

Happy Coding :-)

]]>
<![CDATA[Node.js AWS SDK: How to list all the keys of a large S3 bucket?]]>

http://codingfundas.com/node-js-aws-sdk-how-to-list-all-the-keys-of-a-large-s3-bucket/5b0aa30be869c112976b5b33Sun, 27 May 2018 13:58:33 GMT

Let's say you have a big S3 bucket with several thousand files. Now, you need to list all the keys in that bucket in your Node.js script. The AWS SDK for Node.js provides a method listObjects but it returns only 1000 keys in one API call. It does, however, also send a flag IsTruncated to indicate whether the result was truncated or not. If the response contains IsTruncated as true, then it means you need to call listObjects again, but this time you need to pass a Marker in your parameters which tells AWS:

Hey, I've received the list of objects up to this Marker object, send me the ones after it please. Thanks!

We'll use this idea to write our code in a simple and easy to understand manner using one of the new features of ES8 in Javascript called Async/Await. For that to work, you will need Node.js version 8 or higher.

First, I'll show you the script and then we will break it down to understand what it is doing. So, here's the code:

const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  region: 'eu-central-1',
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
});

async function listAllObjectsFromS3Bucket(bucket, prefix) {
  let isTruncated = true;
  let marker;
  while(isTruncated) {
    let params = { Bucket: bucket };
    if (prefix) params.Prefix = prefix;
    if (marker) params.Marker = marker;
    try {
      const response = await s3.listObjects(params).promise();
      response.Contents.forEach(item => {
        console.log(item.Key);
      });
      isTruncated = response.IsTruncated;
      if (isTruncated) {
        marker = response.Contents.slice(-1)[0].Key;
      }
    } catch (error) {
      throw error;
    }
  }
}

listAllObjectsFromS3Bucket('<your bucket name>', '<optional prefix>');

The above script will print all the keys from the bucket matching the prefix that you provided. If you want to do something useful with the objects instead of just printing them to the console, you can easily tweak the above script to do that.
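For example, here is a minimal variation that collects the keys into an array and returns them instead of logging them, so the caller can do whatever it wants with the list. It assumes the same s3 client instance from the script above, and the function name is just illustrative; we'll stick with the original script for the walkthrough below.

async function getAllKeysFromS3Bucket(bucket, prefix) {
  const keys = [];
  let isTruncated = true;
  let marker;
  while (isTruncated) {
    const params = { Bucket: bucket };
    if (prefix) params.Prefix = prefix;
    if (marker) params.Marker = marker;
    // Same pagination logic as the original script, but we accumulate keys instead of printing them
    const response = await s3.listObjects(params).promise();
    response.Contents.forEach(item => keys.push(item.Key));
    isTruncated = response.IsTruncated;
    if (isTruncated) {
      marker = response.Contents.slice(-1)[0].Key;
    }
  }
  return keys;
}

// Usage (from inside another async function):
// const keys = await getAllKeysFromS3Bucket('<your bucket name>', '<optional prefix>');
// console.log(keys.length, 'keys found');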

Now let us break it down to smaller parts and understand what each part is doing. Starting from the top:

const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  region: 'eu-central-1',
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
});

This is quite simple. We are importing the module aws-sdk and then instantiating an s3 client using the accessKeyId and secretAccessKey from our environment variables. Now, instead of process.env.AWS_ACCESS_KEY_ID, we could also use a hard-coded value of our access key id, but I wouldn't recommend that because of security concerns. It's always good to separate configuration from code, and it's also a good practice to provide credentials via environment variables. In my script, I am using the eu-central-1 region of AWS but you can change that to the region where your S3 bucket is. Now that we have our s3 client instantiated, we can call S3 related methods of the AWS API. For our problem, we just need one method and that is s3.listObjects.
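For example, assuming you saved the script as list-keys.js (the file name is just an example), you could run it with the credentials supplied as environment variables like this:

$ AWS_ACCESS_KEY_ID=<your key id> AWS_SECRET_ACCESS_KEY=<your secret key> node list-keys.js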

Let us take a look at the next section of our script:

async function listAllObjectsFromS3Bucket(bucket, prefix)

listAllObjectsFromS3Bucket is an asynchronous function which expects two parameters, bucket and prefix. An important thing to note here is the use of keyword async before the function keyword. This is necessary because we are using the await keyword inside this function. For the ES8 Async/Await to work, whenever we use await in a function, that function must have the async keyword as a prefix to the function definition. If you remove the async keyword from the function definition, you will get a SyntaxError. Feel free to try that.

Now if you look inside the while loop in the function definition, you will see the line:

const response = await s3.listObjects(params).promise();

Let's understand what this line of code is doing. If you check the AWS documentation for s3.listObjects method, you will see that the function expects two arguments. First is the params and second is the callback. But in our code we only provided params and no callback. Why is that?

Because almost all the AWS SDK methods also support promises. I would almost always recommend using promises instead of callbacks because of several advantages of promises over callbacks. We won't get into that discussion in this tutorial but let us see how we can use the AWS SDK methods to return promises instead of passing a callback function to them. Well, as you can see in our code, it is quite simple. We just need to do two things:

  • Omit the callback argument from the function call
  • Call the .promise() method to get the promise

So, the result of s3.listObjects(params).promise() will be a promise which will also have a then method of its own. Just to give you a clearer picture, consider the following code snippet which uses callback:

s3.listObjects(params, function (error, data){
    // do something with error and data here
});

We can convert this to promise based code like below:

s3.listObjects(params).promise()
    .then(function (data){
        // do something with data here
    })
    .catch(function (error) {
        // handle your error here
    });

Async/Await takes it to the next level. Whenever a function returns a promise, we can use Async/Await with that function. So the above code can be re-written using Async/Await like below:

try {
    const data = await s3.listObjects(params).promise();
    // do something with data here
} catch(error) {
    // handle your error here
}

Now, for someone who is new to Javascript, the Async/Await solution will be much easier to understand. The beauty of Async/Await is that it lets you write asynchronous code as if it were synchronous code. Although it has some pitfalls, in many cases it makes your code much easier to understand. Please note, however, that the try/catch block in the above code snippet must live inside a function marked with the async prefix.

Now coming to the next section of our code:

response.Contents.forEach(item => {
  console.log(item.Key);
});

Here we are just logging the Key of every item in the response to stdout.

isTruncated = response.IsTruncated;
if (isTruncated) {
  marker = response.Contents.slice(-1)[0].Key;
}

isTruncated is a flag on which our while loop is based. It is initialized to true so that the first iteration executes. In subsequent iterations, its value depends on response.IsTruncated as returned by the AWS SDK. When isTruncated is true, we assign the Key of the last element in the response to our variable marker. The listObjects function accepts a parameter called Marker inside the params object. If the Marker is not provided, it starts fetching the list of objects from the beginning. However, whenever Marker is provided, it starts fetching the list of objects after that element. The expression

response.Contents.slice(-1)[0].Key;

will return the Key of the last element of the response.Contents array. In each iteration of the while loop, we are setting our marker to key of the last element of the response. When we reach the end of the list, response.IsTruncated will be false and our code will exit the while loop.

And that's how we can list all the objects in an S3 bucket with a large number of objects in it. Happy Coding :-)

]]>
<![CDATA[Callback? What's that?]]>http://codingfundas.com/javascript-callbacks-for-beginners/5b0940ebc2962c1011ec2094Sat, 26 May 2018 11:22:04 GMT

If you are new to Node.js, some of the things can be a bit confusing and overwhelming in the beginning. You might have several questions like:

  • What is the V8 engine?
  • What is the Event Loop?
  • How does asynchronous code run in Node.js?
  • What's a callback?
  • What are Promises?
  • What is Async/Await?

In this tutorial, we will address one of these questions:

What's a callback?

Callbacks

What are callbacks?

They are just functions.

Why do we call them callbacks then?

Well, it's just terminology we Javascript developers use for functions which are used in some special cases, usually involving asynchronous code execution. Usually in Node, we pass a function as an argument to an asynchronous function, and we need this function (the one being passed) to execute inside the function to which we are passing it once the asynchronous operation completes. We call such functions (the ones being passed as arguments) callbacks. Makes sense?

Ok, I take that as a "NO". Let me take a step back. As in many other high level languages, functions are first class objects in Javascript.

What does that mean?

It means that you can treat functions same way as you would treat an object, a number or a string. You can assign a function as a value to a variable. Example:

var aNumber = 1;
var aString = 'Just a string';
var aNotSoUsefulFunction = function () {
    console.log('Hello, World!');
}

See? You can assign a function as a value to a variable, just like you can assign a number or a string as a value to a variable.

Hmm. Ok. I understand that. But what does it have to do with callbacks?

Wait. We'll get there shortly. First I need to make sure you understand the concept that functions are first class objects in Javascript. Another cool thing we can do with functions is pass them as arguments to another function, just as you can pass a number or a string as an argument to a function. I'll show that to you shortly. First take a look at the function below:

// This function takes a string as an 
// argument and prints that to console
function printGreeting(name) {
    console.log('Hello', name);
}

printGreeting('Jerry');
printGreeting('Newman!');

In the above code snippet, we are passing a string as the argument name to the function printGreeting and that function just prints the greeting to the console. Our function works nicely as long as the argument name is a string. Now, Javascript is a dynamically typed language, which means that just by looking at our function definition, we cannot tell whether the name argument is going to be a string or something else. The intended behaviour of our function is that it expects name to be a string, but there is no type checking inside the function. We can pass whatever type we want, although our function may not work as we want it to. For example, we can pass a number as name:

printGreeting(123);

Go ahead, try that. It doesn't fail. It will just print Hello 123 as the output. What do you think will happen if we pass another function as the name argument? Here is what I am talking about:

function emptyFunction () {
    // this function doesn't do anything
}

function printGreeting(name) {
    console.log('Hello', name);
}

printGreeting(emptyFunction);

Did you try that? No? Please do and see what happens. Our printGreeting function won't fail even in this case. It will just print the function definition of our emptyFunction. The point I am trying to highlight here is that functions can be passed as arguments to other functions in Javascript. In our code snippet above, we first defined the emptyFunction and then passed it as an argument to the printGreeting function. But we could also define the emptyFunction on the fly while passing it as an argument. The above code can be rewritten as:

function printGreeting(name) {
    console.log('Hello', name);
}

printGreeting(function emptyFunction() {
    // this function doesn't do anything
});

In fact, we do not even need to name our emptyFunction here. So the code can be again rewritten as:

function printGreeting(name) {
    console.log('Hello', name);
}

printGreeting(function () {
    // this function doesn't do anything
});

Such functions are called anonymous functions in Javascript and you will see them everywhere in any Javascript code. It's very important that you understand the example above. If you didn't then please read it again until you do.

Our example here is kinda useless but we can do a lot of cool stuff by passing functions as arguments to other functions. We will gradually come to the useful functions but for now, here is another not so useful function:

function runAfterCountingToTen(functionToExecute, argumentToPass) {
    for (let i=1; i<=10; i++) {
        console.log(i);
    }
    functionToExecute(argumentToPass);
}

function printGreeting(name) {
    console.log('Hello', name);
}

runAfterCountingToTen(printGreeting, 'Jerry');

function printSquareOfNumber(n) {
    console.log(n*n);
}

runAfterCountingToTen(printSquareOfNumber, 7);

Here we have defined a generic function runAfterCountingToTen which expects two arguments:

  • A function to execute (example: printGreeting)
  • An argument that needs to be passed to that function (example: Jerry)

Our function will first print the numbers from 1 to 10 and then call the function passed in the first argument with the argument which is passed as the second argument. That is, in first case it will call:

printGreeting('Jerry');

And in the second case, it will call:

printSquareOfNumber(7);

What we are doing here is that we are passing a function functionToExecute (example printGreeting) to another function runAfterCountingToTen and we are executing our functionToExecute inside the runAfterCountingToTen.

The functionToExecute here represents a callback function. So, our printGreeting and printSquareOfNumber are both callback functions. Now let us revisit the definition that seemed confusing at the beginning of our tutorial:

Usually in Node, we pass a function as an argument to an asynchronous function, and we need this function (the one being passed) to execute inside the function to which we are passing it once the asynchronous operation completes. We call such functions (the ones being passed as arguments) callbacks. Makes sense?

Does that make sense now?

Well, kinda. What's an asynchronous function?

Very good question. You see, here in our example, runAfterCountingToTen is not an asynchronous function. That's why I said "usually". Usually we use callbacks in Node.js when there is an asynchronous operation. But just the presence of a callback does not mean the function is asynchronous. Our runAfterCountingToTen is a synchronous function and it expects a callback as its first argument.

But what is an asynchronous function? And how does it differ from a synchronous function?

I understand your curiosity and we will definitely visit that topic in a different tutorial. But for now, let the concept of callbacks sink in. Let me give you an assignment. Make sure that you first try to solve it on your own before scrolling down to see the solution. So, here is the assignment:

Create a file called myFile.txt and add some random content to it. Write a function which expects a file path as its argument and prints the size of the file in bytes. Then pass this function and the path to myFile.txt as arguments to our function runAfterCountingToTen to see if it gives the desired output.

Hint: You need to use the fs.statSync function of Node.js

Solution:

.
.
.
.
.
.
.
.
.
.
.

const fs = require('fs');

function runAfterCountingToTen(functionToExecute, argumentToPass) {
    for (let i=1; i<=10; i++) {
        console.log(i);
    }
    functionToExecute(argumentToPass);
}

function printFileSize(filePath) {
    const fileSizeInBytes = fs.statSync(filePath).size;
    console.log('File Size in Bytes:', fileSizeInBytes);
}

runAfterCountingToTen(printFileSize, 'myFile.txt');

That was easy. I hope you were able to solve it.

Now, let us do something a little bit more realistic. Here is your second assignment:

Write a function called divide which expects three arguments: operand1, operand2 and a callback. Here is how the function signature should look:

function divide(operand1, operand2, callback) {
    
}

callback is a function which expects two arguments: the first argument is an error and the second argument is a result. Here is what its function signature should look like:

function myCallback(error, result) {

}

Inside the divide function, we need to check whether operand2 is zero or not. If not, execute the callback with the first argument (error) as null and the second argument as the result of dividing operand1 by operand2. If operand2 is zero, then you need to execute the callback function with the first argument as an error 'Error: Division by zero' and the second argument as null. Inside the myCallback function, check if error is not null. If it is not null, print the message 'An error occurred during division:' followed by the error message, and then return. Else, if error is null, print the result.

Solution:

.
.
.
.
.
.
.
.
.
.

function myCallback(error, result) {
    if (error) {
        console.log('An error occurred during division:', error.message);
        return;
    }
    console.log(result);
}

function divide(operand1, operand2, callback) {
    if (operand2 !== 0) {
        const result = operand1 / operand2;
        callback(null, result);
    } else {
        callback(new Error('Error:  Division by zero'));
    }
}

divide(10, 2, myCallback);
divide(10, 0, myCallback);

You can also define the myCallback function as an anonymous function while calling the divide function. In that case, the solution looks like this:


function divide(operand1, operand2, callback) {
    if (operand2 !== 0) {
        const result = operand1 / operand2;
        callback(null, result);
    } else {
        callback(new Error('Error:  Division by zero'));
    }
}

divide(10, 2, function (error, result){
    if (error) {
        console.log('An error occurred during division:', error.message);
        return;
    }
    console.log(result);
});

divide(10, 0, function (error, result){
    if (error) {
        console.log('An error occurred during division:', error.message);
        return;
    }
    console.log(result);
});

We are repeating code here, which is not a good thing, but I wanted to share this style of passing the callback as an anonymous function because it is a very common style and you will see it quite often in Node.js code.

Please also remember that our divide function is still a synchronous function even though its signature looks like that of an asynchronous function. I'll get to asynchronous functions in a different tutorial, but first I just wanted to make sure you understand the concept of callbacks and get comfortable with it. Many beginners find callbacks confusing and sometimes assume that if a function accepts a callback as an argument then it must be an asynchronous function. That's not true, though. Callbacks do not imply asynchronous code and should not be confused with it.

I hope this tutorial solidifies your understanding of callbacks in Javascript. Happy coding :-)

]]>