Winston is a commonly used logger in Node.js. In most cases, I use winston and also pipe morgan's output into it. I recall that the first time I used winston, I followed some online tutorials on how to configure and use it, but there were always problems here and there for my scenarios. Therefore, I decided to write this blog to introduce how I configure the winston logger in my projects.
Project Structure
I normally place the winston logger configuration file like the following:
For the settings below, please refer to the official winston documentation. I would emphasize two things:
On the transports part: in the following configuration, I write logs to files under the directory /log/*.log. That is a very basic setup; you can of course add settings such as rotating to a new log file once the current one reaches a certain size, or further organizing log files by the date they were generated.
If not in the “production” environment, I have the logger also output to the console to facilitate debugging.
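For reference, a minimal config/winston.js along these lines could look like the sketch below. The file paths, levels, and format choices are my assumptions, not the exact configuration from this project; the `stream` property at the end is what morgan writes through later.

```javascript
// config/winston.js - a minimal sketch; adjust paths, levels, and formats to your needs.
const winston = require('winston');

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  transports: [
    // Basic file transports under /log; size- or date-based rotation can be added here.
    new winston.transports.File({ filename: 'log/error.log', level: 'error' }),
    new winston.transports.File({ filename: 'log/combined.log' })
  ]
});

// Outside production, also log to the console to facilitate debugging.
if (process.env.NODE_ENV !== 'production') {
  logger.add(new winston.transports.Console({
    format: winston.format.simple()
  }));
}

// Expose a stream so morgan can write HTTP access logs through winston.
logger.stream = {
  write: (message) => logger.info(message.trim())
};

module.exports = logger;
```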
const morgan = require('morgan');
const logger = require('../config/winston');

// 'combined' is the default message format string.
// The format is made from tokens, each defined inside the morgan library;
// you can create custom tokens to show what you want from a request.
// In the options, I overwrote the stream so morgan writes through winston.
const morganMiddleware = morgan('combined', { stream: logger.stream });
When talking about TDD for web systems, we cannot avoid testing RESTful APIs. I have seen some tutorials online (e.g. articles discussing how to unit test a REST API using certain npm packages in a Node.js environment) which suggest that a unit test (hereafter, UT) may hit the database.
Well, in my humble opinion, a UT should not hit the database. The core idea of a unit test is “isolation”: isolate the function/method under test, presume that everything the function/method interacts with works as expected, and see if the function/method produces the result we are hoping for. If a UT hits the database, its result depends on the state of the database, and this coupling between the function/method and the database means the test is not a real “unit test”. Tests that require database access should be categorized as integration tests (hereafter IT) or end-to-end tests. In the rest of this article, I will explain in detail how to do UT and IT for RESTful APIs, with source code samples for the Node.js environment.
Integration Test for RESTful API under Nodejs Environment: using MongoDB and supertest as an example
Assume that we want to test the following REST API, “POST /api/auth/signup”, for which we have code in our project.
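The logic behind that route can be sketched roughly as follows. All names here (`signupHandler`, `userStore`) are hypothetical stand-ins; the real project presumably uses an Express route backed by Mongoose.

```javascript
// Hypothetical sketch of the logic behind POST /api/auth/signup.
// `userStore` stands in for whatever persistence layer the project uses.
async function signupHandler(body, userStore) {
  // Reject requests that are missing required fields.
  if (!body.name || !body.email || !body.password) {
    return { status: 400, text: 'Missing required fields' };
  }
  // In the real route this would hash the password and save via Mongoose.
  await userStore.create({ name: body.name, email: body.email, password: body.password });
  return { status: 200, text: 'User has been created!' };
}
```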
To test the API in an integration test, we need to complete two steps:
connect to a database for testing purposes only, not the production database
verify that the API outputs the expected results
1. Set up a MongoDB connection selector between the production and testing environments
app.js
// According to the value of process.env.NODE_ENV, choose between the prod and test connections
const mongodbConnect = process.env.NODE_ENV === 'test'
  ? process.env.MONGODB_CLOUD_TEST
  : process.env.MONGODB_CLOUD_PROD;

const dbConnect = async () => {
  try {
    await mongoose.connect(mongodbConnect);
    winstonLogger.info(`Connected to mongodb of ${process.env.NODE_ENV}`);
  } catch (err) {
    winstonLogger.error(`Failed to connect to mongodb due to ${err}`);
  }
};
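The selection logic itself can be pulled into a tiny pure helper (a hypothetical refactor, not code from the project) so it can be verified without touching Mongo at all:

```javascript
// Hypothetical helper: pick the connection string based on NODE_ENV.
// Passing the env object in as a parameter keeps the function pure and testable.
function pickMongoUri(env) {
  return env.NODE_ENV === 'test'
    ? env.MONGODB_CLOUD_TEST
    : env.MONGODB_CLOUD_PROD;
}
```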
it('post a new user and respond with 200 and a msg that shows the user has been created', async () => {
  // a real call to the database in the testing environment
  const { text, status } = await request(app)
    .post('/api/auth/signup')
    .set('Accept', 'application/json')
    .send(tmpUser);

  expect(status).to.be.equal(200);
  expect(text).to.be.equal('User has been created!');
});
...
});
As can be seen from the above code snippet, supertest makes a call that reaches the database to create a user profile, and the user profile is actually created in the database of the test environment. Since the test involves the connection with a real database, I would categorize this type of test as an integration test.
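Because this test writes a real user document, it helps to clean the test database between runs. A sketch using mocha hooks could look like the following; the collection name 'users' is my assumption about the project's schema, not something stated above.

```javascript
// Hypothetical mocha cleanup: wipe the users created by the signup tests,
// then close the mongoose connection once the suite ends.
const mongoose = require('mongoose');

after(async () => {
  await mongoose.connection.collection('users').deleteMany({});
  await mongoose.connection.close();
});
```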
Then, how should the unit test for this REST API look?
Unit Test for RESTful API under Nodejs Environment: using MongoDB and supertest + nock as an example
To clearly show the difference between an integration test and a unit test, in this part I will still use supertest as the agent (an alternative could be axios, for example) to send a request to “POST /api/auth/signup”. But in the unit test, I will not let the request hit the database via the HTTP server; instead, I use nock to intercept the request and return the expected result, as shown below:
describe('POST signup', () => {
const tmpUser = {
name: 'testaa',
email: 'testaa@gmail.com',
password: '1234'
};
it('post a new user and respond with 200 and a msg that shows the user has been created', async () => {
// file http request to the REST API
const signupUser = async () => {
return await request('http://test.com')
.post('/api/auth/signup')
.set('Accept', 'application/json')
.send(tmpUser);
};
// mock http server: check if the server received expected params in request.body, then reply expected response
nock('http://test.com')
.post('/api/auth/signup', body => {
expect(body.name).to.be.equal('testaa');
expect(body.email).to.be.equal('testaa@gmail.com');
expect(body.password).to.be.equal('1234');
return true;
})
.reply(200, 'User has been created!');
const { text, status } = await signupUser();
expect(status).to.be.equal(200);
expect(text).to.be.equal('User has been created!')
});
...
});
In the above unit test, I use nock to mock an HTTP server. supertest still sends a request, but the request is intercepted by the mock server, which returns the expected result instead of reaching a real server and hitting a real database. This is how isolation is achieved: I do not couple the test with a real database; based on the assumption that all other parts work as expected (via mocking), everything is completed within the test method.
A further question that may be asked is: what is the point of a unit test like this? Should I include such unit tests in my project? The answer is yes and no.
No: if you are developing a relatively small system as a software vendor for a small business, or you are building a new software product targeting a small number of users, not aiming for millions or even billions of users in the future, then I would say do not bother adding such unit tests to your system. The ROI is not worth it; keeping the integration tests that check the connection with the real database is enough.
Yes: if you are developing a system, big or small, in a big company, or if you have a software product aiming for millions of daily users in the future (which means the product will be owned by a big company), then I would say you probably need to add these unit tests to your system. The reason? I would quote a line from the movie Kingdom of Heaven: nothing, everything.
I have experience of working both in academia (at the PhD level) and in industry (for some of the big names). The transition between the two was a lesson for me, and I would like to share it with those who are in one of the following situations:
Do not know whether to choose academia or industry after undergraduate study or receiving a Master's degree
Thinking of leaving academia (after completing a PhD - congratulations, that is a huge achievement already :) - or quitting a PhD) and going into industry, but not sure if it is the right choice.
Academia Way of Getting Things Done
The academic way of getting things done is to learn first (learn things related to your research area), then try to find the breakthrough. That means you will need some time for preparation before actually beginning to do something.
All the secrets behind success in academia, either as a PhD candidate or a PhD holder on the way to tenure, can be summed up in one sentence: finding a “niche” (i.e. the so-called breakthrough).
So how can I find the “niche”? you may ask. In my experience, it is a two-step process:
First, you need to build a solid foundation in the research area. Basically, you need a solid understanding of the common theories in the area you are working on. This stage is completed by reading a great amount of publications, books, and so on. It takes time and is a necessary step. Imagine you are presenting at a conference: in the Q&A session, someone criticizes what you just said by quoting a famous theory in the area; if you do not know much about that theory, you will not know how to fight back...
Second, after arming yourself with the knowledge from stage 1, you can begin your actual work: trying to find something new, the “breakthrough” in the area that you can work on.
In short, in academia, you are allowed time to ramp up before doing something.
Industrial Way of Getting Things Done
The industrial way of achieving something is not like the academic way. There is very limited time for preparation; it is more like learning by practicing.
While working in industry, you have to move much faster: build something tangible that can be evaluated by stakeholders, collect feedback, improve it, re-evaluate, and so on. There is no time for thorough preparation; you learn things in the process of solving problems. The key in industry is to have something that proves the feasibility of an idea as soon as possible and push it to the market.
Conclusion
Think about how the GoF design patterns were discovered. If it had happened in academia, the way to find the patterns would have been: tons of reading, then proposing some hypotheses, verifying them via experiments, and finally publishing the findings (i.e. the design patterns). In reality it was not like that: the GoF reviewed existing practices in the software industry and found those patterns. The patterns were created by software engineers in practice without a deliberate purpose; some of the most impactful things were invented by accident (see Antifragile).
There is no right or wrong between these two ways of resolving issues, because they exist in different contexts. In industry, most of the time, we are dealing with practical problems and need to see impact right away; in academia, we are often dealing with issues for the future.
I spent a few weeks developing and setting up my own website. The registered domain is the root domain (i.e. example.com), and the DNS setting on GoDaddy redirects the root domain (example.com) to the www subdomain (e.g. www.example.com). The last step before publishing was to add SSL certificates to my root domain and www subdomain. I found a nice tutorial online on how to do it: Let’s Encrypt on Heroku with DNS Domain Validation. However, it turned out that by following the instructions I could not generate a certificate and private key for my www subdomain. I will show the issue and explain how I resolved it in detail.
Lovely "Lock":)
The Issue
My ultimate goal is to make https://www.example.com work in browsers. I have three custom domains in my Heroku app: *.example.com, example.com, www.example.com.
Please enter the domain name(s) you would like on your certificate (comma and/or space separated) (Enter 'c' to cancel):
Later, at the stage of applying the certificate to Heroku (“sudo heroku certs:add –type=…/certificate …/privatekey”), Heroku will not offer the domain name www.example.com as an option for the certificate, only example.com. As a result, www.example.com is not secured.
Resolution
The resolution is easy but took me a lot of time to find. The trick is to use a wildcard subdomain. At the stage of providing the domain name to Let’s Encrypt, I used *.example.com; later, when I applied the generated certificate and private key to Heroku, it offered www.example.com as an option.
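For reference, the certbot invocation with a wildcard would look something like this. The DNS-challenge flags follow certbot's standard CLI, and the domain is a placeholder for your own; completing the prompt requires adding the TXT record at your DNS provider as in the tutorial mentioned above.

```shell
# Request a certificate covering the wildcard (and optionally the apex domain),
# validated via a DNS TXT record.
sudo certbot certonly --manual --preferred-challenges dns \
  -d "*.example.com" -d "example.com"
```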
=== Almost done! Which of these domains on this application would you like this certificate associated with?
? Select domains www.jingjies-blog.com
Then, checking on Heroku, I can see the following:
The wildcard certificate
Secured www.jingjies-blog.com
I planned to add a site visit counter and a page view counter to my blog (Hexo + Express.js). Searching online, it seems there is very limited information on how to add such a thing to a Hexo blog where:
The Hexo blog is integrated into an existing Express server, i.e. the blog should be visited at express_site/blog
The theme I use for the blog does not have such a feature, which means I cannot just add something to the theme's _config.yml and have the view counter work like a charm. I need to customize it.
There were several options that I had in mind:
Only customize the Hexo blog and the theme –> I realized this is not feasible. I do not know how to let Hexo or the theme load a customized .js file under the Hexo + Express structure;
Load the js from the Express side –> I ended up using this method. Express serves the related js file as a static file, and the js file adds the data to the view counter in the Hexo pages. Below is what I got in the end.
View counter in blog post
Now I will introduce in detail how I achieved it.
Set up view counter
Set up Firebase (there are tons of tutorials out there, so I am not going to repeat them :))
The only thing I would like to emphasize is the database setup. The database should be a Realtime Database, and the “rules” are set as below. I will explain in detail later why this does not cause security issues.
Firebase Realtime Database Rules
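The rules I am referring to open read and write access publicly; a minimal sketch of such a ruleset looks like the following (this is the permissive form; see the security discussion further below for why it is acceptable here).

```json
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```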
After registering and setting up Firebase, add the following code to hexo_blog_directory/themes/{your theme}/layout/xxx.ejs where you want to show the view counter.
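A sketch of what such a snippet could look like, using the namespaced (v8) Firebase SDK from the CDN. All config values are placeholders, and the node name `pageviews` is my own naming, not something Firebase requires.

```html
<!-- Hypothetical view-counter fragment; replace the config with your own project's values. -->
<span id="view-count">0</span>
<script src="https://www.gstatic.com/firebasejs/8.10.1/firebase-app.js"></script>
<script src="https://www.gstatic.com/firebasejs/8.10.1/firebase-database.js"></script>
<script>
  firebase.initializeApp({
    apiKey: 'YOUR_API_KEY',                              // placeholder
    databaseURL: 'https://YOUR_PROJECT.firebaseio.com'   // placeholder
  });
  // One counter node per page, keyed by a sanitized pathname.
  var key = location.pathname.replace(/[.#$\[\]\/]/g, '_');
  var ref = firebase.database().ref('pageviews/' + key);
  // transaction() atomically increments the stored count.
  ref.transaction(function (current) { return (current || 0) + 1; })
     .then(function (result) {
       document.getElementById('view-count').textContent = result.snapshot.val();
     });
</script>
```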
After the above steps, the view counter is ready to go :)
Extra issue: is it okay to expose the Firebase apiKey?
When you see that I put the apiKey etc. in a static .js file, you must have a question in mind: is there a security issue here? I had the same question. Then I found the following article: Is it safe to expose Firebase apiKey to the public?
The answer is a bit long, so I will sum it up here: for a small, non-critical site, it is okay to do this. The apiKey in Firebase is not like the traditional API keys we know; it is okay to expose it. All you need to do is set up a whitelist of sites for Firebase (that is why I mentioned earlier in the article that you can set the Firebase database rules to allow public read/write). Here is how to do it in detail:
Go to Firebase Console
Check Authentication Menu -> Sign-in method tab
Scroll down to Authorized domains
Click the “Add Domain” button and add your domains (localhost is added by default, and I do not delete it as it is needed for local testing)
Set up whitelist site
The above should be okay for a small site. I am not going to over-engineer it, but you can always tailor the security rules according to your business requirements :)
You are more than welcome to cite the content of this blog, but please do indicate the original reference link.
I have an Express server, and recently I wanted to integrate the Hexo blog framework into the existing Express.js site, for example, to make the blog accessible at express_site/blog. While exploring the feasibility, I realized that it is hard to find a thorough tutorial on how to achieve this. Some information can be found in Hexo GitHub issue 3644 and the x-hexo-app-connect middleware. However, none of them offer a complete guide to seamlessly integrating a Hexo blog into an Express server. For example, following the instructions provided by the x-hexo-app-connect middleware, the app will run into routing issues. Therefore, I decided to write this blog as a tutorial on integrating Hexo into Express, i.e. making a Hexo blog a route of an existing Express server.
Hexo under localhost:3000/blog
My way of achieving it is based on x-hexo-app-connect, and some of the steps described below are based on the instructions provided by x-hexo-app-connect.
Getting Started
1.1 Make a sub-directory under the Express project directory, e.g. express_proj/blog; use this command under the project directory:
$ mkdir blog
1.2 Enter the command to install hexo-cli globally
$ sudo npm i hexo-cli -g
1.3 Go to the blog’s directory (created in step 1.1) and enter the following command in terminal to init Hexo:
$ hexo init
1.4 In the blog directory, enter the command in terminal to install x-hexo-app-connect package:
$ npm i x-hexo-app-connect --save
1.5 Create an index.js file in the blog's root directory, e.g. if you want to do it in a bash terminal (on a Mac), enter the command below:
$ touch index.js
1.6 Fill in the index.js with the following code:
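A sketch of what such an index.js could look like is below. The exact option names come from my reading of the x-hexo-app-connect README and may differ in the version you install; what matters is that the module exports a function taking the Express app and returning a router configured with the blog's route.

```javascript
// blog/index.js - a sketch; option names are assumptions, check the
// x-hexo-app-connect README for the exact configuration shape.
const xHexoAppConnect = require('x-hexo-app-connect');

module.exports = (app) => {
  // Returns a router that serves the Hexo blog under the given route.
  return xHexoAppConnect(app, {
    route: '/blog'
  });
};
```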
For the returned configuration, the route is the most important part: that is where you configure how the Hexo blog is visited from the original Express server. In my case, the blog is visited via the route express_site/blog.
1.7 In app.js (the main js file of Express), add the following to use x-hexo-app-connect in Express:
const app = express();

// Put it after app is initialized, as the app will be used as a parameter
const blogRouter = require('./blog/index')(app);

...
app.use(blogRouter);
1.8 This step is very important: set the “root” parameter in the _config.yml file under the root directory of Hexo, and set it to /blog. Without this, the Hexo blog home page can be visited via express_site/blog, but when clicking links to articles, categories, archives, etc., Express will report 404 as the routes cannot be found.
# URL
## Set your site url here. For example, if you use GitHub Pages, set url as 'https://username.github.io/project'
url: http://localhost:3000/
root: /blog # for locating the theme's static files (.css etc)
If root is not set to /blog, URLs such as localhost:3000/all-categories will be generated, which lead to 404 errors; with root set, the link becomes /blog/all-categories.
1.9 Apply themes. You can just follow the setup instructions of each specific theme, and it should be okay.
1.10 After all the above steps, run the Express server; if the terminal shows the following, you are good :)
INFO  Validating config
INFO  Start processing
INFO  [Hexo-App-Connect] is running on route /blog
Hope this blog helps you in some way :)
You are more than welcome to cite the content of this blog, but please do indicate the original reference link.
It is Jingjie here; this is my first blog on my own blogging site. I have always wanted a place where I can write and post my own blogs, instead of writing things here and there.
The reason for writing a blog
I am a passionate software engineer, and by the nature of this field, I learned lots of things online (i.e. I received help from others) and wrote down a great number of notes (mainly in my own OneNote) about the knowledge I acquired: to strengthen my understanding of the things I learned and to pick them up again in the future.
And through all these days working in academia and industry, I have formed my own understanding of software, technology, etc. I would like to share it with others, hoping it will be helpful in some way (i.e. I would like to return the favor).
On this blog site, I will post my understanding of computing, and of life as well. Things start :)