Amazon EC2 Cloud with LAMP Stack and Loopback CRUD API - Step 2

This is a continuation blog describing the installation of a LoopBack CRUD API in an EC2 instance running the LAMP stack. This blog will show you how to create a CRUD API on top of your database using LoopBack in Node.js.


  Amazon Web Service    |      Anupam Chakraborty    |      Feb 07 2019 02:34 PM


Introduction


We have already seen in my previous blog, Amazon EC2 Cloud with LAMP Stack and Loopback CRUD API, how to set up an EC2 instance with Amazon Linux 2 and install the LAMP stack [Apache / PHP / MySQL (MariaDB)] along with phpMyAdmin.

This is the second part of the blog, in which we will create a database with two tables, add a few records, install Node.js and LoopBack, and finally create the LoopBack CRUD API.

This will be, just like the previous one, a hands-on session, so I would expect you to work along as you go through the blog. That way you can understand it better.


Goals

  • Create a database, two tables, and insert some records
  • Install Node.js on our EC2 instance
  • Install LoopBack on our EC2 instance
  • Create a LoopBack CRUD API and access it from the Internet


Step 1: Create a Database, 2 Tables and some records.


To start with, we need to log in to our MySQL database. Since we have phpMyAdmin installed, we can complete this step in two ways: we can open PuTTY, log in to our shell, and run the necessary SQL statements, or we can do the same from the phpMyAdmin client. I will log in through PuTTY to run the SQL, but you are welcome to try the other method. The SQL statements will be almost the same.

First, let us log in to the EC2 instance that we created in the previous blog.




Once you have logged in, connect to the MySQL client using the command


mysql -u <UserId> -p
<Password>


This should log you in to the MySQL client, and you can now start executing your SQL commands.



Let us first create a database

create database usrtran;
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| #mysql50#.rocksdb  |
| information_schema |
| mysql              |
| performance_schema |
| usrtran            |
+--------------------+


Now let us go into the database we created and create two tables, usrmas (user master) and usrtrans (user transaction).


use usrtran
create table usrmas (usrid int not null primary key, vName varchar(30) not null, vCity varchar(20), vZip varchar(5));
create table usrtrans (tranid int not null primary key, usrid int, debit float, credit float);

Now let us insert some records into the tables.


insert into usrmas values (1, 'John Doe', 'Atlanta', '30303');
insert into usrmas values (2, 'Jane Doe', 'New York', '11005');
insert into usrtrans values (1,1,1000.00,0);
insert into usrtrans values (2,1,0,40.50);
insert into usrtrans values (3,2,3000.00,0);
insert into usrtrans values (4,2,0,520.30);


Once this is done, we can check what is in our tables using the following commands.


select * from usrmas;
select * from usrtrans;
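As a quick sanity check, you can summarize the inserted data per user. Here is a small Node.js sketch that does the same aggregation in memory, using inline copies of the rows above rather than a live database connection:

```javascript
// Inline copies of the rows inserted above; no database connection needed.
const usrmas = [
  { usrid: 1, vName: 'John Doe', vCity: 'Atlanta',  vZip: '30303' },
  { usrid: 2, vName: 'Jane Doe', vCity: 'New York', vZip: '11005' }
];
const usrtrans = [
  { tranid: 1, usrid: 1, debit: 1000.00, credit: 0 },
  { tranid: 2, usrid: 1, debit: 0,       credit: 40.50 },
  { tranid: 3, usrid: 2, debit: 3000.00, credit: 0 },
  { tranid: 4, usrid: 2, debit: 0,       credit: 520.30 }
];

// Total debits and credits per user, joined with the user's name.
const totals = usrmas.map(u => {
  const rows = usrtrans.filter(t => t.usrid === u.usrid);
  return {
    name: u.vName,
    debit: rows.reduce((s, t) => s + t.debit, 0),
    credit: rows.reduce((s, t) => s + t.credit, 0)
  };
});

console.log(totals);
// [ { name: 'John Doe', debit: 1000, credit: 40.5 },
//   { name: 'Jane Doe', debit: 3000, credit: 520.3 } ]
```

The equivalent in SQL would be a GROUP BY over a join of the two tables; the point here is simply to confirm what the CRUD API will be serving later.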


You should see the entries in the table like below:


This can also be seen in phpMyAdmin as below.



Step 2: Install Node.js in your EC2 instance.


Now that we have created our test database and tables, it is time to install Node.js along with its dependencies. Let us head back to our PuTTY shell. We will first install NVM, the Node Version Manager. This is an amazing tool that lets you install multiple versions of Node.js on one machine and switch between them.


Let us first install NVM from GitHub:


[ec2-user@ip-172-31-81-193 ~]$ curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.32.0/install.sh | bash


Next step is to activate the NVM


[ec2-user@ip-172-31-81-193 ~]$ . ~/.nvm/nvm.sh


Next, we will install the latest LTS version of Node.js. I am installing Node 10.

[ec2-user@ip-172-31-81-193 ~]$ nvm install 10.15


Once installed, you can verify which version of Node.js is currently active with the command

[ec2-user@ip-172-31-81-193 ~]$ node -e "console.log('Running Node.js ' + process.version)"


This should show you which version of Node.js is currently running.


Step 3: Install Loopback in your EC2 instance


Now that we have successfully installed Node.js on our EC2 instance, let us install LoopBack. LoopBack is a highly extensible, open-source Node.js framework developed and supported by StrongLoop, an American company that was acquired by IBM in 2015. IBM continues to contribute to and support the StrongLoop community through these projects, which provide key technologies for the API economy.

For our case, we will install LoopBack 3, which is also the LTS version.


[ec2-user@ip-172-31-81-193 ~]$ npm install -g loopback-cli


That is it; we have installed everything we need to continue with our API creation. So let us go ahead and start creating our app now.


Step 4: Create a Loopback CRUD API and access it from the Internet.


Create App


Let us start our LoopBack app. To do this, we use the lb command:


[ec2-user@ip-172-31-81-193 ~]$ pwd
/home/ec2-user
[ec2-user@ip-172-31-81-193 ~]$ lb app database-crud
? Enter name of the directory to contain the project: database-crud
  create database-crud/
    info change the working directory to database-crud

? Which version of LoopBack would you like to use? 3.x (Active Long Term Support)
? What kind of application do you have in mind? empty-server (An empty LoopBack API, without any configured models or datasources)
Generating .yo-rc.json


Once completed, change directory into database-crud and install the LoopBack MySQL connector:


[ec2-user@ip-172-31-81-193 ~]$ cd database-crud
[ec2-user@ip-172-31-81-193 database-crud]$ npm install --save loopback-connector-mysql


Create Datasource


The next step is to create a datasource to connect to the database. Let us create a datasource called usrtranDS:

[ec2-user@ip-172-31-81-193 database-crud]$ lb datasource usrtranDS
? Enter the datasource name: usrtranDS
? Select the connector for usrtranDS: MySQL (supported by StrongLoop)
? Connection String url to override other settings (eg: mysql://user:pass@host/db): <Enter>
? host: localhost
? port: 3306
? user: anupam
? password: [hidden]
? database: usrtran


Once this is completed, you can see the datasource configuration in the file server/datasources.json.
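Based on the answers given above, the generated entry in server/datasources.json should look roughly like the following (the password value is a placeholder; your file will contain what you typed at the prompt):

```json
{
  "usrtranDS": {
    "host": "localhost",
    "port": 3306,
    "url": "",
    "database": "usrtran",
    "password": "<your password>",
    "name": "usrtranDS",
    "user": "anupam",
    "connector": "mysql"
  }
}
```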



Create Model


The next step is to create two models, one for each of our two tables, usrmas and usrtrans.


For now, we will create a dummy model without any fields or structure in it.

[ec2-user@ip-172-31-81-193 database-crud]$ lb model usrmas
? Select the datasource to attach undefined to: usrtranDS (mysql)
? Select model's base class Model
? Expose usrmas via the REST API? Yes
? Custom plural form (used to build REST URL):
? Common model or server only? common
Let's add some usrmas properties now.

Enter an empty property name when done.
? Property name:
[ec2-user@ip-172-31-81-193 database-crud]$


Note that I have not entered anything for the property name; this creates a dummy model without any properties in it. We would similarly create another model called usrtrans. Once you are done, you can see the model JSON files created inside the folder common/models.



Our next step is to create the actual models based on the tables. We will use a shortcut to do this: a JavaScript file that takes care of creating the model definition from the database schema.

To do this, we need to create a folder. Type the following commands


[ec2-user@ip-172-31-81-193 database-crud]$ cd server
[ec2-user@ip-172-31-81-193 server]$ mkdir bin
[ec2-user@ip-172-31-81-193 server]$ cd bin
[ec2-user@ip-172-31-81-193 bin]$ wget https://www.wedointegration.com/tutorialdocs/discover-schema.tar.gz
[ec2-user@ip-172-31-81-193 bin]$ tar -xvzf discover-schema.tar.gz
[ec2-user@ip-172-31-81-193 bin]$ rm discover-schema.tar.gz
[ec2-user@ip-172-31-81-193 bin]$ ls -lrt


Now let us try to create the models based on our database.


[ec2-user@ip-172-31-81-193 bin]$ node discover-schema.js -ds usrtranDS -sn usrmas



Create the model for the second table using the command


[ec2-user@ip-172-31-81-193 bin]$ node discover-schema.js -ds usrtranDS -sn usrtrans


Now that we have completed the models, let us go back to the models folder and check the JSON files.


The usrmas.json file would now look like this:


{
    "name": "usrmas",
    "options": {
          "idInjection": false,
          "mysql": {
               "schema": "usrtran",
               "table": "usrmas"
          }
    },
    "properties": {
          "usrid": {
               "type": "Number",
               "required": true,
               "length": null,
               "precision": 10,
               "scale": 0,
               "id": 1,
               "mysql": {
                     "columnName": "usrid",
                     "dataType": "int",
                     "dataLength": null,
                     "dataPrecision": 10,
                     "dataScale": 0,
                     "nullable": "N"
               }
          },
          "vName": {
               "type": "String",
               "required": true,
               "length": 30,
               "precision": null,
               "scale": null,
               "mysql": {
                     "columnName": "vName",
                     "dataType": "varchar",
                     "dataLength": 30,
                     "dataPrecision": null,
                     "dataScale": null,
                     "nullable": "N"
               }
          },
          "vCity": {
               "type": "String",
               "required": false,
               "length": 20,
               "precision": null,
               "scale": null,
               "mysql": {
                     "columnName": "vCity",
                     "dataType": "varchar",
                     "dataLength": 20,
                     "dataPrecision": null,
                     "dataScale": null,
                     "nullable": "Y"
               }
          },
          "vZip": {
               "type": "String",
               "required": false,
               "length": 5,
               "precision": null,
               "scale": null,
               "mysql": {
                     "columnName": "vZip",
                     "dataType": "varchar",
                     "dataLength": 5,
                     "dataPrecision": null,
                     "dataScale": null,
                     "nullable": "Y"
               }
          }
    }
}


As you can see, the model file is now created with all the necessary information from the MySQL database. You can check the other file, usrtrans.json, in the same way.
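The discovery step maps MySQL column metadata to LoopBack property types, which is how "int" became "Number" and "varchar" became "String" above. A simplified sketch of that mapping is shown below; this is an illustration only, not the actual code of loopback-connector-mysql:

```javascript
// Simplified illustration of how MySQL column types end up as
// LoopBack property types (not the connector's actual code).
function toLoopBackType(mysqlDataType) {
  switch (mysqlDataType) {
    case 'int':
    case 'float':
    case 'double':
    case 'decimal':
      return 'Number';
    case 'varchar':
    case 'char':
    case 'text':
      return 'String';
    case 'datetime':
    case 'timestamp':
      return 'Date';
    default:
      return 'String'; // fall back to String for unrecognized types
  }
}

console.log(toLoopBackType('int'));     // Number
console.log(toLoopBackType('varchar')); // String
```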

Let us finally make one configuration change so that we can use port 8099, which we opened earlier while creating our EC2 instance.


Move to the server folder; there should be a file named config.json. Update the port field, changing it from 3000 to the port that you opened earlier. In my case that was 8099.
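For reference, a LoopBack 3 server/config.json with the port changed would look something like this; the fields other than port are the generator's defaults, so your file may differ slightly:

```json
{
  "restApiRoot": "/api",
  "host": "0.0.0.0",
  "port": 8099,
  "remoting": {
    "context": false,
    "rest": {
      "handleErrors": false,
      "normalizeHttpPath": false,
      "xml": false
    },
    "json": {
      "strict": false,
      "limit": "100kb"
    },
    "urlencoded": {
      "extended": true,
      "limit": "100kb"
    },
    "cors": false
  }
}
```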



Once this is done, go to the home folder of your app and execute the following command.


node .


There are other ways to run a Node.js app, such as nodemon or pm2, but those are outside the scope of this document.



And that is it. Your API is ready for your database. That was a piece of cake, wasn't it?

Now hit the URL http://<Your Public Elastic IP Address>:8099/explorer. In my case this was http://3.93.102.33:8099/explorer, and you should be able to access the Swagger page for your API.




You can try out the various resources in your API and use the functions to insert, update, or retrieve data.
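Beyond the explorer UI, the generated endpoints accept LoopBack's JSON filter syntax on the query string. Here is a small Node.js sketch of building such a URL; the IP address is the example one from above, so substitute your own, and note that the exact route name depends on how LoopBack pluralized your model (check the explorer page for the exact path):

```javascript
// Build a LoopBack filter URL, e.g. to fetch all users in Atlanta.
// The base URL is an example; substitute your own Elastic IP and port,
// and the route name shown in your API explorer.
const base = 'http://3.93.102.33:8099/api/usrmas';
const filter = { where: { vCity: 'Atlanta' } };
const url = base + '?filter=' + encodeURIComponent(JSON.stringify(filter));

console.log(url);
// http://3.93.102.33:8099/api/usrmas?filter=%7B%22where%22%3A%7B%22vCity%22%3A%22Atlanta%22%7D%7D
```

A GET request to that URL should return only the rows matching the where clause, which is how clients can query the database without writing any SQL.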


Conclusion


So, we have completed the steps of creating a database in EC2 and then creating a CRUD API on top of that database. This is advantageous because the client no longer needs a JDBC connection to the database; it can simply make HTTP calls to perform all the database DML operations. Feel free to try this yourself and let me know if you were able to do it. Let me know if you need me to change anything here.

