Recently, I needed to run load tests for a set of REST APIs in a Node.js environment. After studying some online articles, I decided to give AutoCannon a go. My scenario: I already had a set of requests set up in Postman, and I did not want to rewrite everything for the load test. Luckily, I found a solution that exactly matched my requirements (listed below), but I still want to write my own version from an AutoCannon newcomer's perspective, which will hopefully be useful for future readers.

Step.1 Export A Collection of Requests from Postman

As shown below, click the “…” button on the right side of the requests collection, then choose “Export” in the popup menu.

Fig.1 - Export Requests Collection
After this step, we should have a JSON file containing all the information about the REST APIs we would like to test.

Step.2 Write Code for Load Testing

We need to create a separate xxx.js file that tells AutoCannon what to do.

  • Load request data from exported JSON file
const autocannon = require('autocannon');
const fs = require('fs/promises');

// read array of items from exported .json file from postman
let entries = undefined;

async function getRequests() {
  const data = await fs.readFile(
    './youtube_clone_proj.postman_collection.json',
    'UTF-8'
  );

  entries = JSON.parse(data).item;
  return true;
}
  • Set up AutoCannon for Each Request
entries.map(async (entry) => {
  // there are multiple requests in each item
  entry.item.filter((ele) => {
    // filter out the empty requests
    return ele.request.url !== undefined;
  }).map(async (ele) => {
    console.log(ele.request.method + " " + ele.request.url.raw);
    const result = await autocannon({
      url: ele.request.url.raw,
      method: ele.request.method,
      connections: 100,
      workers: 50,
      duration: 5,
      body: ele.request.body === undefined ? null : JSON.stringify(ele.request.body),
      // read other options here: https://github.com/mcollina/autocannon#autocannonopts-cb
    }, finishedBench);

    // track progress
    autocannon.track(result, { renderProgressBar: false });

    // this is used to kill the instance on CTRL-C
    process.once('SIGINT', () => {
      result.stop();
    });

    function finishedBench(err, res) {
      console.log('finished bench', err, res);
    }
  });
});

Step.3 Launch the Test

In the terminal window, run the following:

node xxx.js

Then we should be able to see output like the following for each individual request:

Fig.2 - API1 result
Fig.3 - API2 result
For sure, there are more details left to discover, e.g. the settings of autocannon, but those are left to reading and searching the official documentation :)

Reference

Benchmark express apis with autocannon from postman collection


Using Winston Logger in Node.js

in Technology, Nodejs

Winston is a commonly used logger in Node.js. In most cases, I use the winston logger and also integrate the output of morgan into winston. I recall that the first time I used winston, I followed some online tutorials on how to configure and use it, but there were always problems here and there for my scenarios. Therefore, I decided to write this blog to introduce how I configure the winston logger in my projects.

Project Structure

I normally place the winston configuration file as follows:

Project
| ...
├───middleware
├───models
├───config
│ ├───winston.js
│ ...
├───app.js
├───package.json
| ...

Then, other .js files use the winston config file like the following:

const winstonLogger = require('./config/winston');
...
winstonLogger.error(`${err.status || 500} - ${err.message} - ${req.originalUrl} - ${req.method} - ${req.ip}`);
...

Winston Configuration File Explained

The content of the configuration file winston.js mentioned in the previous section is shown below:

const appRoot = require('app-root-path');
const winston = require('winston');

const { format, transports } = winston;
const path = require('path');

const colors = {
  error: 'red',
  warn: 'yellow',
  info: 'green',
  http: 'magenta',
  debug: 'white',
};

winston.addColors(colors);

// define the custom settings for each transport (file, console)
const logger = winston.createLogger({
  level: 'http',
  format: format.combine(
    format.timestamp({
      format: 'YYYY-MM-DD HH:mm:ss',
    }),
    format.errors({ stack: true }),
    format.splat(),
    format.printf(
      (info) => `${info.timestamp} ${info.level}: ${info.message}`,
    ),
    format.json(),
  ),
  defaultMeta: { service: 'quickpost' },
  transports: [
    new transports.File({ filename: path.join(appRoot.toString(), '/logs/error.log'), level: 'error' }),
    new transports.File({ filename: path.join(appRoot.toString(), '/logs/combined.log') }),
  ],
});

if (process.env.NODE_ENV !== 'production') {
  logger.add(new transports.Console({
    format: format.combine(
      // print all the messages colored
      format.colorize({ all: true }),
      format.printf(
        (info) => `${info.timestamp} ${info.level}: ${info.message}`,
      ),
      format.simple(),
    ),
  }));
}

logger.stream = {
  write: (message) => logger.http(message),
};

module.exports = logger;

To explain the above source code in detail:

Set Colors for Logger Levels

const colors = {
  error: 'red',
  warn: 'yellow',
  info: 'green',
  http: 'magenta',
  debug: 'white',
};

winston.addColors(colors);

Config Other Settings of Winston

For the following settings, please refer to the official site of the winston logger. I would emphasize two things:

  • On the transports part: in the following configuration, I write the logs into files under the /logs directory. That is a very basic setting; you can of course add settings such as generating a new log file after the current one reaches a certain size, or further organizing log files according to their generation date.

  • If not in the “production” environment, I keep the logger output on the console to facilitate debugging.

// define the custom settings for each transport (file, console)
const logger = winston.createLogger({
  level: 'http',
  format: format.combine(
    format.timestamp({
      format: 'YYYY-MM-DD HH:mm:ss',
    }),
    format.errors({ stack: true }),
    format.splat(),
    format.printf(
      (info) => `${info.timestamp} ${info.level}: ${info.message}`,
    ),
    format.json(),
  ),
  defaultMeta: { service: 'quickpost' },
  transports: [
    new transports.File({ filename: path.join(appRoot.toString(), '/logs/error.log'), level: 'error' }),
    new transports.File({ filename: path.join(appRoot.toString(), '/logs/combined.log') }),
  ],
});

if (process.env.NODE_ENV !== 'production') {
  logger.add(new transports.Console({
    format: format.combine(
      // print all the messages colored
      format.colorize({ all: true }),
      format.printf(
        (info) => `${info.timestamp} ${info.level}: ${info.message}`,
      ),
      format.simple(),
    ),
  }));
}

Merge Morgan into Winston

The following code merges morgan into winston.

logger.stream = {
  write: (message) => logger.http(message),
};
Config Morgan

To make winston work with morgan, we need to add the following morgan settings:

  • File directory:
    Project
    | ...
    ├───middleware
    │ ├───morgan.js
    │ ...
    ├───app.js
    ├───package.json
    | ...
  • Config morgan:
    const morgan = require('morgan');
    const logger = require('../config/winston');

    const morganMiddleware = morgan(
      // Define message format string (this is the default one).
      // The message format is made from tokens, and each token is
      // defined inside the Morgan library.
      // You can create your custom token to show what you want from a request.
      'combined',
      // Options: in this case, I overwrote the stream and the skip logic.
      // See the methods above.
      { stream: logger.stream },
    );

    module.exports = morganMiddleware;

  • Use in app.js
    ...
    const morganMiddleware = require('./middleware/morgan');
    ...
    app.use(morganMiddleware);
    ...

If things go well …

You should be able to see logs in the console like the following:

And the same records can be found in the log files under /logs/*.log.


Background

While talking about TDD in developing web systems, we cannot avoid testing RESTful APIs. I have seen some tutorials online (e.g. articles discussing how to unit test a REST API by leveraging certain npm packages in a Node.js environment) which mention that a Unit Test (hereafter, UT) will hit the database.

Well, in my humble opinion, a UT should not hit the database. The core idea of a unit test is “isolation”: isolate the function/method under test, presume that everything the function/method interacts with works as expected, and see if the function/method generates the result we are hoping for. Therefore, if a UT hits the database, its result will depend on the status of the database; the coupling between the function/method and the database means the test is not a real “Unit Test”. Tests that require database access should be categorized as Integration Tests (hereafter, IT) or End-to-End tests. In the rest of this article, I will explain in detail how to do UT and IT for RESTful APIs, with source code samples for the Node.js environment.

Integration Test for RESTful API under Nodejs Environment: using MongoDB and supertest as an example

Assume that we want to test the following REST API: “POST /api/auth/signup”. We have the following code in our project:

app.js

import authRouter from './routes/auth-route.js';
...
app.use('/api/auth', authRouter);
...

auth-route.js

...
import {
signup,
signin,
} from '../controllers/auth-controller.js';
...
router.post('/signup', signup);
...

To test the API in an integration test, we need to complete two steps:

  1. connect to a database for testing purposes only, not the production database
  2. test whether the API outputs the expected results

1. Set up a MongoDB connection selector between the production and testing environments

app.js

// According to the value of process.env, choose between the prod and test connections
const mongodbConnect = process.env.NODE_ENV === 'test'
  ? process.env.MONGODB_CLOUD_TEST
  : process.env.MONGODB_CLOUD_PROD;

const dbConnect = async () => {
  try {
    await mongoose.connect(mongodbConnect);
    winstonLogger.info(`Connect to mongodb of ${process.env.NODE_ENV}`);
  } catch (err) {
    winstonLogger.error(`Failed to connect to mongodb due to ${err}`);
  }
};

2. Using supertest to complete the test

auth-route.test.js

describe('POST signup', () => {
  const tmpUser = {
    name: 'testaa',
    email: 'testaa@gmail.com',
    password: '1234'
  };

  it('post a new user and respond with 200 and a msg that shows the user has been created', async () => {
    // a real call to the database in the testing environment
    const { text, status } = await request(app)
      .post('/api/auth/signup')
      .set('Accept', 'application/json')
      .send(tmpUser);

    expect(status).to.be.equal(200);
    expect(text).to.be.equal('User has been created!');
  });
  ...
});

As can be seen from the above code snippet, supertest makes a call that creates a user profile, and the user profile is actually created in the database of the test environment. Since the test involves the connection with a real database, I categorize this type of test as an integration test.

Then, how should the unit test for the REST API look?

Unit Test for RESTful API under Nodejs Environment: using MongoDB and supertest + nock as an example

To clearly show the difference between an Integration Test and a Unit Test, in this part I will still use supertest as the agent (an alternative could be axios, for example) to send a request to “POST /api/auth/signup”. But in the Unit Test, I will not let the request hit the database via an http server; instead, I use nock to intercept the request and return the expected result, as shown below:

describe('POST signup', () => {
  const tmpUser = {
    name: 'testaa',
    email: 'testaa@gmail.com',
    password: '1234'
  };

  it('post a new user and respond with 200 and a msg that shows the user has been created', async () => {

    // file http request to the REST API
    const signupUser = async () => {
      return await request('http://test.com')
        .post('/api/auth/signup')
        .set('Accept', 'application/json')
        .send(tmpUser);
    };

    // mock http server: check if the server received expected params in request.body, then reply expected response 
    nock('http://test.com')
      .post('/api/auth/signup', body => {
        expect(body.name).to.be.equal('testaa');
        expect(body.email).to.be.equal('testaa@gmail.com');
        expect(body.password).to.be.equal('1234');
        return true;
      })
      .reply(200, 'User has been created!');

    const { text, status } = await signupUser();

    expect(status).to.be.equal(200);
    expect(text).to.be.equal('User has been created!')
  });
  ...
});

In the above unit test, I use nock to mock an http server. supertest still sends a request, but the request is intercepted by the mock server, which returns the expected result instead of reaching a real server and hitting a real database. This is how isolation is achieved: I do not couple the test with a real database; based on the assumption that all other parts work okay (via mocking), everything is completed within the test method.

A further question may be asked: what is the point of a Unit Test like this? Should I include such unit tests in my project? The answer is yes and no.

  • No: if you are developing a relatively small system as a software vendor for a small business, or you are building a new software product targeting a small number of users, not aiming for millions or even billions of users in the future, then I would say do not bother adding such unit tests to your system. The ROI is not worth it; keeping the integration tests that check the connection with the real database is enough.

  • Yes: if you are developing a system, either big or small, in a big company, or if you have a software product aiming for millions of daily users in the future (which means the product will be owned by a big company), I would say you probably need to add such unit tests to your system. The reason? I would quote a saying from the movie <Kingdom of Heaven>: nothing, everything.


Academia Way vs Industrial Way of Getting Things Done

in Miscellaneous, Life

Behind the Blog

I have experience of working both in academia (at the PhD level) and in industry (for some of the big names). The transition between the two was a lesson for me, and I would like to share it with those who are in one of the following situations:

  • Do not know whether to choose academia or industry after undergraduate study or receiving a Master's degree
  • Thinking of leaving academia (after completing a PhD - congratulations! That is a huge achievement already :) - or quitting a PhD) and going to industry, but do not know if it is the right choice.

Academia Way of Getting Things Done

The academia way of getting things done is to learn first (learn things related to your research area), then try to find the breakthrough. That means you will need some time for preparation before actually beginning to do something.

All the secrets behind success in academia, either as a PhD candidate or a PhD holder on the way to tenure, can be summed up in one sentence: finding a “niche” (i.e. the so-called breakthrough).

So how can I find the “niche”? you may ask. In my experience, it is a two-step process:

  1. First, you will need to build a solid foundation in the research area. Basically, you will need a solid understanding of the common theories in the area you are working on. This stage can be completed by reading a great amount of publications, books… It takes time and is a must. Imagine you are presenting at a conference and, in the Q&A session, someone criticizes what you just said by quoting a famous theory in the area; if you do not know much about the theory, you do not know “how to fight back”…

  2. Second, after arming yourself with the knowledge from stage 1, you can begin your actual work: trying to find something new, a “breakthrough” in the area, that you can work on.

In short, in academia, you will be allowed time to ramp up before doing something.

Industrial Way of Getting Things Done

The industrial way of achieving something is not like the academia way. There is very limited time for preparation; it is more like learning by practicing.

While working in industry, you have to move much faster: build something tangible that can be evaluated by stakeholders, collect feedback, improve it, re-evaluate it… Therefore, there is no time for thorough preparation; you learn things in the process of solving problems. The key in industry is to have something that can prove the feasibility of an idea as soon as possible and push it to the market.

Conclusion

Think about how the GoF Design Patterns were discovered. If it had been in academia, the way to find the patterns would have been: tons of reading, then proposing some hypotheses, verifying them via experiments, and finally publishing the findings (i.e. the design patterns). In reality, it was not like that: the GoF reviewed previous practices in the software industry and found those patterns. The patterns were invented by software engineers in practice, without a deliberate purpose; some of the most impactful things were invented by accident (see Antifragile).

There is no right or wrong between these two ways of resolving issues, because they belong to different contexts. In industry, most of the time we deal with practical problems and need to see impact right away; in academia, we often deal with issues for the future.


Background

I spent a few weeks developing and setting up my own website. The registered domain is a root domain (i.e. example.com), and the DNS setting on GoDaddy redirects the root domain (example.com) to the www subdomain (e.g. www.example.com). The last step before publishing was to add SSL certificates to my root domain and www subdomain. I found a nice tutorial online on how to do it: Let's Encrypt on Heroku with DNS Domain Validation. However, it turned out that following the instructions, I could not generate a certificate and private key for my www subdomain. I will show the issue and explain how I resolved it in detail.

Lovely "Lock":)

The Issue

My ultimate goal is to make https://www.example.com work in browsers. I have three custom domains in my Heroku app: *.example.com, example.com, www.example.com.

Following the instructions of Let's Encrypt on Heroku with DNS Domain Validation, when Let's Encrypt asked me to provide the domain name (as shown below), no matter whether I used example.com or www.example.com,

Please enter the domain name(s) you would like on your certificate (comma and/or
space separated) (Enter 'c' to cancel):

later, at the stage of applying the certificate to Heroku (“sudo heroku certs:add --type=…/certificate …/privatekey”), Heroku would not offer www.example.com as an option for the certificate, only example.com. As a result, www.example.com was not secured.

Resolution

The resolution is easy but took me a lot of time to find. The trick is using a wildcard subdomain. At the stage of providing the domain name to Let's Encrypt, I used *.example.com; then later, when I applied the generated certificate and private key to Heroku, it offered www.example.com as an option.

=== Almost done! Which of these domains on this application would you like this certificate associated with?
? Select domains www.jingjies-blog.com

Then check on Heroku, I can see the following:

The wildcard certificate
Secured www.jingjies-blog.com

On GoDaddy, change the forwarding from http://www.example.com to https://www.example.com.

Set up Forwarding

Finally, I can see the “lock” on browser :)


Add View Counter to Hexo in Express

in Series, Building Blog, Technology, Nodejs

I planned to add a site-visit counter and a page-view counter to my blog (Hexo + Express.js). Searching online, it seems there is very limited information on how to add such things to a Hexo blog where:

  • The Hexo blog is integrated into an existing Express server, i.e. the blog is visited at express_site/blog
  • The theme I use for the blog does not have such a feature, meaning I cannot just add something to the theme's _config.yml and have the view counter work like a charm. I need to customize it.

There were several options I had in mind:

  • Only customize the Hexo blog and the theme -> I realized it is not feasible: I do not know how to let Hexo or the theme load a customized .js file under the Hexo + Express structure;
  • Load the js from the Express side -> I finally used this method: let Express serve the related js file as a static file, and the js file adds the data to the view counter in the Hexo pages. Below is what I got in the end.
    View counter in blog post

Now I will introduce how I achieved it in detail.

Set up view counter

  • Set up Firebase (there are tons of tutorials out there, so I am not going to repeat them :))

    There is only one thing I would like to emphasize: the database setup. The database should be a Realtime Database, and the “rules” are set as below. I will explain in detail later why this will not cause security issues.

Firebase Realtime Databse Rules
  • After registering and setting up Firebase, add the following code to hexo_blog_directory/themes/{your theme}/layout/xxx.ejs where you want to show the view counter.
<span id="site-visits">Sitevisits: <span class="count">--</span></span>
<span id="page-views">Pageviews: <span class="count">--</span></span>
  • Add the following .js code to express_proj/public/javascripts/:

    Note:

    • I used Firebase SDK 9
    • The details of database read/write APIs could be found at Firebase Database APIs
    • You could of course test data read/write separately before using the whole block
import { initializeApp } from 'https://www.gstatic.com/firebasejs/9.8.3/firebase-app.js';
import {
  getDatabase, ref, get, set, child,
  // eslint-disable-next-line import/no-unresolved
} from 'https://www.gstatic.com/firebasejs/9.8.3/firebase-database.js';

const viewCounter = async () => {
  // The web app's Firebase configuration
  // For Firebase JS SDK v7.20.0 and later, measurementId is optional
  const firebaseConfig = {
    apiKey: 'your api key',
    authDomain: 'your settings',
    databaseURL: 'xxx',
    projectId: 'xxx',
    storageBucket: 'xxx',
    messagingSenderId: 'xxx',
    appId: 'xxx',
    measurementId: 'xxx',
  };

  // Initialize Firebase
  const firebase = initializeApp(firebaseConfig);
  const db = getDatabase(firebase, firebaseConfig.databaseURL);
  const oriUrl = window.location.host;
  const curUrl = oriUrl + window.location.pathname;

  const readVisits = async (url, selector) => {
    const dbKey = decodeURI(url.replace(/\/|\./g, '_'));
    let count = 1;
    const res = await get(child(ref(db), dbKey));
    if (res.exists()) {
      count = parseInt(res.val() || 0, 10) + 1;
    }
    await set(ref(db, dbKey), count);
    if (selector.length > 0) {
      // eslint-disable-next-line no-param-reassign
      selector[0].innerText = count;
    }
  };

  readVisits(oriUrl, document.querySelectorAll('.post-meta #site-visits .count'));
  if (curUrl && curUrl !== '_') {
    readVisits(`page/${curUrl}`, document.querySelectorAll('.post-meta #page-views .count'));
  }
};

viewCounter();
  • After the above steps, the view counter is ready to go :)

Extra issue: is it okay to expose Firebase apiKey?

When you see that I put the apiKey etc. in a static .js file, you must have a question in mind: is there a security issue here? I had the same question. Then I found the following article: Is it safe to expose Firebase apiKey to the public?

The answer is a bit long, so I will sum it up here: for a small, non-critical site, it is okay to do things like this. The apiKey in Firebase is not like the traditional API keys we know; it is okay to expose it. All you need to do is set up a whitelist of sites for Firebase (that is why I mentioned earlier that you can set the Firebase database rules to open read/write to the public). Here is how you can do it in detail:

  • Go to Firebase Console
  • Check Authentication Menu -> Sign-in method tab
  • Scroll down to Authorized domains
  • Click the “Add Domain” button and add your domains (localhost is added by default, and I do not delete it as it is needed for local testing)
Set up whitelist site

The above should be okay for a small site. I am not going to over-engineer it, but you can always tailor the security rules according to your business requirements :)


I have an Express server, and recently I wanted to integrate the Hexo blog framework into the existing Express.js site, for example, to make the blog accessible at express_site/blog. While exploring the feasibility, I realized it is hard to find a thorough tutorial on how to achieve it. Some information can be found in Hexo GitHub Issue 3644 and the x-hexo-app-connect middleware. However, neither of them offers a complete guide to seamlessly integrating a Hexo blog into an Express server. For example, following the instructions provided by the x-hexo-app-connect middleware, the app will have routing issues. Therefore, I decided to write this blog to offer a tutorial on integrating Hexo into Express, i.e. making a Hexo blog a route of an existing Express server.

Hexo under localhost:3000/blog

My way of achieving it is based on x-hexo-app-connect, and some of the steps described below are based on the instructions provided by x-hexo-app-connect.

Getting Started

1.1 Make a sub-directory under the Express project directory, e.g. express_proj/blog; use this command under the project directory:

$ mkdir blog

1.2 Enter the following command to install hexo-cli globally:

$ sudo npm i hexo-cli -g

1.3 Go to the blog directory (created in step 1.1) and enter the following command in the terminal to init Hexo:

$ hexo init

1.4 In the blog directory, enter the following command in the terminal to install the x-hexo-app-connect package:

$ npm i x-hexo-app-connect --save

1.5 Create an index.js file in the blog's root directory. E.g. if you want to create it using a bash terminal (on Mac), enter the command below:

$ touch index.js

1.6 Fill in the index.js with the following code:

  • For the return part, it configures the Hexo blog. The route option is the most important: it configures how the Hexo blog is visited from the original Express server. In my case, the blog will be visited via the route express_site/blog
    const Hexo = require('hexo');

    module.exports = (app) => {
      const blogDirPath = __dirname;
      const hexo = new Hexo(blogDirPath, {});
      // eslint-disable-next-line global-require
      return require('x-hexo-app-connect')(app, hexo, {
        // The Configs/Options
        log: false,
        compress: false,
        header: true,
        serveStatic: false,
        route: "/blog"
      });
    };
    1.7 In app.js (the main js file of Express), add the following to use x-hexo-app-connect in Express:
    const app = express();
    // Put it after app is initialized, as the app will be used as a parameter
    const blogRouter = require('./blog/index')(app);

    ...
    app.use(blogRouter);

    1.8 This step is very important: set the root parameter in the _config.yml file under the root directory of Hexo to /blog. If root is not set, the Hexo blog home page can be visited via express_site/blog, but when clicking the links of articles, categories, archives, etc., Express will report 404 because the routes cannot be found.
    # URL
    ## Set your site url here. For example, if you use GitHub Page, set url as 'https://username.github.io/project'
    url: http://localhost:3000/
    root: /blog # for locating themes static files (.css etc)
  • If root is not set to /blog, URLs like localhost:3000/all-categories will be generated, which leads to 404 errors.
    ./blog/all-categories

1.9 Apply themes. You can just follow the setup instructions of each specific theme, and it should be okay.
1.10 After all the above steps, run the Express server. If the terminal shows the following, you are good :)

INFO  Validating config
INFO Start processing
INFO [Hexo-App-Connect] is running on route /blog

Hope this blog helps you in some way :)


My first blog

in Miscellaneous, Life

First Blog

It is Jingjie here; this is my first blog on my own blogging site. I have always wanted to have a place where I can write and post my own blogs, instead of writing things here and there.

The reason for writing a blog

I am a passionate software engineer, and by the nature of this area, I have learned lots of things online (i.e. I received help from others) and written down a great number of notes (mainly in my own OneNote) about the knowledge I acquired: to strengthen my understanding of the things learned and to pick them up again in the future.

And through all these days working in academia and industry, I have formed my own understanding of software, technology, etc. I would like to share it with others, hoping it will be helpful in some way (i.e. I would like to return the favor).

On this blog site, I will post my understanding of computers, and of life as well. Let things start :)

Suprise of 2022:)



Jingjie Jiang


Find a place I love the most