Preface
This book is intended for newcomers and front-end developers looking to venture into back-end development, as well as back-end developers aiming to explore front-end development.
It aims to provide a broad overview of modern web development without delving too deeply into details.
You can read the book consecutively or choose individual topics that interest you.
There are numerous code examples, most of which are runnable. You are encouraged to engage with these examples hands-on, but it's also fine to skim through them.
Feng (@codemann) is a professional web developer with a passion for exploration and creation.
Preparation
This book assumes that you are using macOS. If you are using Linux or Windows, you will need to figure out how to install the required software, such as VS Code and Docker, yourself.
Homebrew
On macOS, we use Homebrew to manage packages.
To install Homebrew itself, open Terminal (click on the Spotlight icon, type "terminal" and press Enter), paste the following command, and press Enter:
/bin/bash -c \
"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
VS Code
We will use Visual Studio Code (VS Code) as our IDE. Its support for various languages, tools, and frameworks makes it a top choice for developers of all levels.
To install VS Code, run the following command:
brew install --cask visual-studio-code
Web Development
Web development refers to the processes involved in building websites and web applications. It encompasses both front-end (user interface) and back-end (server-side business logic and IO) development.
Front-End Development
- HTML (HyperText Markup Language): The foundation of any web page, HTML provides the structure and content, defining elements such as headings, paragraphs, images, and links.
- CSS (Cascading Style Sheets): CSS styles HTML elements, controlling the layout, colors, fonts, and overall appearance of a website.
- JavaScript: This programming language adds interactivity to websites, enabling animations, user input handling, and dynamic content changes.
Back-End Development
- Servers: Back-end development involves setting up and managing servers. With the emergence of serverless technology, many developers no longer need to handle this aspect themselves.
- Databases: Databases store and manage the website's data, using tools like MySQL, PostgreSQL, and MongoDB. Back-end developers use programming languages such as Python, Ruby, or Java to interact with databases and perform data retrieval and storage.
- Programming Languages: Popular languages include JavaScript, Go, Ruby, Python, PHP, and Java.
- Frameworks: Back-end development often leverages frameworks to streamline the development process. Examples include Django and Flask for Python, Ruby on Rails for Ruby, and Spring for Java.
The Process of Web Development
- Define goals and target audience.
- Wireframing.
- Visual design.
- Frontend and backend development.
- Test for browser compatibility, functionality, performance, and security.
- Deployment.
- Monitoring and maintenance.
A Brief History of Web Development
Initially, HTML was used primarily for static documents, and the web was text-only. Around 1993, graphical web browsers emerged, allowing multimedia content to be combined with text on the same page.
The Common Gateway Interface (CGI) was introduced, and server-side scripting languages like Perl and PHP gained popularity. With the launch of HTML 2.0, the <form> element enabled users to submit data to the backend, making the web more dynamic.
In 1995, JavaScript was introduced, adding interactivity to web pages. A year later, CSS was created to enhance the presentation layer of web content.
By that time, hyperlinks and form submissions were the primary mechanisms for interacting with the server, often replacing the current page with another one. The advent of Ajax empowered JavaScript to perform asynchronous network operations, allowing applications to request data or HTML from the backend without refreshing the entire page. This methodology, still in use today, can be seen in libraries like pjax and htmx.
In 2006 jQuery was born, simplifying DOM manipulation, with ideas that later influenced the standard DOM API.
During the early 2000s, Flash became the primary technology for creating rich, interactive web content, including animations, games, and video playback. HTML5 was drafted as a response to Flash, enabling web applications to become more capable.
As web applications became more complex, various architectural paradigms emerged for both client-side and backend applications. On the backend, notable architectures and patterns included MVC and IoC. On the frontend, frameworks like Knockout.js (MVVM), Angular (MVVM), and Backbone (MVC) facilitated the development of large-scale, dynamic client applications.
In 2013, React was introduced, marking a new era in web development and leading to its current prominence.
The Command Line
The command line is a text-based interface for interacting with your computer.
Build tools, version control systems, and package managers (like Gradle, Git, and npm) are primarily driven through the command line.
While graphical interfaces simplify certain tasks, a solid command-line foundation is essential for web developers to be efficient, effective, and adaptable in their work.
Basic Commands
To open Terminal, click on the Spotlight icon (magnifying glass) in your menu bar. Type "Terminal" and press Enter.
pwd: Where am I?
Type pwd and press Enter to print the current working directory.
ls: List files and directories
- To list files in the working directory:
ls
- To list hidden files as well:
ls -a
cd: Change directory
- To move to the Desktop:
cd ~/Desktop
- To move to your home directory:
cd ~
- To move to the parent directory:
cd ..
- To move back to the previous directory:
cd -
mkdir: Create a new directory
- To create a directory named "projects":
mkdir projects
rm: Remove a file
Be careful with rm, as it permanently deletes files.
- To remove a file:
rm myfile.txt
- To remove an empty directory:
rmdir mydir
- To remove a non-empty directory:
rm -rf mydir
mv: Move or rename a file or directory
- To move a file named "file.txt" to the "Documents" directory:
mv file.txt ~/Documents/
- To rename a file named "old.txt" to "new.txt":
mv old.txt new.txt
cp: Copy a file or directory
To back up a file named "file.txt":
cp file.txt backup.txt
Tips and Tricks
Don't type long commands by hand; use Ctrl + R to search your command history.
There are also shortcuts for editing:
Ctrl + A: Moves the cursor to the beginning of the line.
Ctrl + E: Moves the cursor to the end of the line.
Ctrl + L: Clears the screen.
Stop the current command
Use Ctrl + C to interrupt the current command.
Sometimes you might need Ctrl + D to exit.
Running Multiple Tasks
Some commands are long-running tasks, such as a web server. To run another command, you typically need to open a new terminal.
Alternatively, you can use Ctrl + Z to suspend the current
command. After executing the new command, you can use fg to resume
the suspended command.
If you want to run multiple long-running tasks, you can use bg to send the suspended command to the background. When you're done with it, use kill to terminate it.
$ bun server.ts
[1] + 55407 suspended bun server.ts
$ bg %1
[1] + 55407 continued bun server.ts
$ kill %1
[1] + 55407 terminated bun server.ts
To start a command in the background, simply append an & to the
command, like this:
bun server.ts &
Another more intuitive solution is to use a terminal multiplexer, such
as tmux or screen.
HTML
HTML, or HyperText Markup Language, is the fundamental language used to create and structure web pages. It provides the basic building blocks for a webpage, such as headings, paragraphs, links, images, and other content. Originally designed for organizing and presenting documents, HTML has evolved and is now extensively used in web development to create both simple and complex applications. It works in conjunction with CSS (Cascading Style Sheets) for styling and JavaScript for interactivity, enabling developers to build rich, interactive web experiences.
Your First Web Page
Open Terminal, create a folder and start up VS code:
mkdir html
cd html
code .
In VS Code, press Command+Shift+X to enter the Extensions view and install the "Live Server" extension.
Then press Command+Shift+E to open the File Explorer and create a new HTML file named index.html. Paste the following code into the file:
<!DOCTYPE html>
<html>
<head>
<title>My Web Page</title>
</head>
<body>
<h1>h for header</h1>
<p>p for paragraph</p>
</body>
</html>
Then right-click on your HTML file and click "Open with Live Server". It will start a local server and automatically open your default browser.
VS Code is highly recommended here, but there are many other options for serving static content, such as Python's built-in HTTP server. Simply run the following command to start a server in the current directory:
python3 -m http.server 8000
Then, visit http://localhost:8000 to view the webpage.
HTML Elements
Let's take a closer look at the code:
<!DOCTYPE html>: Declares the document type as HTML5 (don't bother with previous versions).
<html>: The root element of an HTML document, which has a tree-like structure.
<head>: Contains metadata about the webpage, such as the title and stylesheets.
<title>: Sets the title of the webpage, displayed in the browser's tab.
<body>: Contains the visible content of the webpage.
<h1>: Defines the most important heading on the page; there are also h2 through h6.
<p>: Defines a paragraph of text.
Basic Structure of an Element
start tag end tag
| |
.-------+-------. .-+-.
| | | |
"<p class='demo'>This is a paragraph.</p>"
| | | |
'-----+----' '----------+--------'
| |
attribute content
An element consists of a start tag, content, and an end tag. The start
tag specifies the element's name and contains attributes. The end tag
contains a / before the element name. The content can be text or other
elements.
Some elements are self-closing and do not require an end tag or content:
<img src="image.jpg" alt="A beautiful image">
<div>
The <div> tag, short for "division", does not convey any semantic
meaning about the content it wraps; it is essentially a "block-level"
element used to group other elements together for styling with CSS or
for scripting with JavaScript. It is highly versatile and is
frequently used to create layout structures or sections within a
webpage. However, for better accessibility and SEO, it's recommended
to use semantic HTML tags (like <header>, <footer>, <article>,
and <section>) where appropriate, as they provide more meaningful
context about the content.
Accessibility
Web accessibility ensures that websites and web applications are usable by people with disabilities. It involves creating content that is accessible to everyone, regardless of their abilities.
To enhance accessibility:
- Use Semantic HTML: Utilize appropriate HTML elements (e.g., <header>, <nav>, <main>, <footer>) to convey the content structure.
- Provide Descriptive Alt Text: Include meaningful alt text for images to communicate their purpose to visually impaired users.
- Implement ARIA Attributes: Use Accessible Rich Internet Applications (ARIA) attributes to convey additional context to assistive technologies like screen readers.
Example:
<button aria-label="Play">
<img src="play-button.png">
</button>
In this example, the aria-label="Play" provides a text label for the button, which assists screen reader users.
Other Common ARIA Attributes:
role="button": Indicates that an element should be treated as a button.
aria-disabled="true": Signals that an element is disabled.
aria-expanded="true": Indicates that an element is expanded (e.g., for a collapsible section).
aria-selected="true": Shows that an item is selected (e.g., within a list).
SVG
SVG, or Scalable Vector Graphics, is an XML-based format for creating two-dimensional vector graphics. Unlike raster images, SVGs maintain high quality at any size, making them ideal for web design and responsive layouts.
With SVG, you can create shapes, paths, and text that are easily manipulated through CSS and JavaScript, allowing for dynamic and interactive graphics. It’s widely supported across modern web browsers, making it a powerful tool for developers and designers looking to enhance visual content on the web.
Syntax
While complex graphics are usually created using design tools like Illustrator and Inkscape, it's possible to write SVG by hand, and it can be embedded directly in HTML:
shape
<svg width="100" height="100">
<circle cx="50" cy="50" r="40" stroke="black" stroke-width="2" fill="yellow" />
</svg>
text
<svg width="200" height="100">
<rect width="200" height="100" fill="indigo" />
<text x="100" y="75"
font-family="Verdana"
font-size="50"
fill="white"
text-anchor="middle">SVG</text>
</svg>
path
<svg width="200" height="150">
<path d="M 10 80 C 40 10, 65 10, 95 80 S 150 150, 180 80"
stroke="indigo" fill="transparent" stroke-width="2" />
</svg>
The <path> element is a versatile component used to create complex
shapes and lines. It allows you to define a shape by specifying a
series of commands and parameters in a single attribute. Here is a
breakdown:
- M (move to): Moves the starting point to a specified coordinate without drawing a line.
- C (cubic Bézier curve): Draws a cubic Bézier curve from the current point to a specified point, using two control points.
- S (smooth cubic Bézier curve): Similar to C, but the first control point is inferred from the previous curve.
SVG for Application Development
SVG can be generated and manipulated programmatically, making it suitable for creating data visualizations, charts, and other dynamic graphics based on data inputs. Libraries such as D3.js and SVG.js simplify SVG manipulation, event handling, and data visualization.
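Because SVG is just markup, it can be produced with ordinary string templating. As a hypothetical sketch (the barChart function below is made up for illustration, not part of any library), this turns an array of numbers into the markup for a simple bar chart; libraries like D3.js perform the same kind of data-to-element mapping against the live DOM:

```javascript
// Build a tiny SVG bar chart as a markup string from a data array.
function barChart(values, barWidth = 40, height = 100) {
  const max = Math.max(...values);
  const bars = values
    .map((v, i) => {
      const h = Math.round((v / max) * height); // bar height, scaled to the tallest value
      const x = i * (barWidth + 10);            // 10px gap between bars
      const y = height - h;                     // SVG y grows downward, so anchor at the bottom
      return `<rect x="${x}" y="${y}" width="${barWidth}" height="${h}" fill="indigo" />`;
    })
    .join("\n  ");
  const width = values.length * (barWidth + 10);
  return `<svg width="${width}" height="${height}">\n  ${bars}\n</svg>`;
}

console.log(barChart([30, 80, 55]));
```

The returned string can be written into a page with innerHTML or saved as a standalone .svg file.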
CSS
Cascading Style Sheets (CSS) is a language used to control the presentation and layout of HTML documents. While HTML provides the structure and content of a webpage, CSS is responsible for the visual appearance.
Inline Styles
Inline styles are defined directly on an HTML element using the style attribute. They override any external or internal CSS.
Let's modify our web page:
<!DOCTYPE html>
<html>
<head>
<title>My Web Page</title>
</head>
<body>
<h1>Hello, World!</h1>
<p style="color: white; background: blue;">This is a paragraph of text.</p>
</body>
</html>
Inline CSS lives in the style attribute in the form of property: value; pairs.
Properties determine the style attributes of the element, like color, font-size, background, etc.
Values specify the desired value for each property, such as red,
16px, or url("image.jpg").
Mixing styles with content looks messy. And what if you have multiple paragraphs that you want to style consistently?
External Stylesheets
CSS can be separated from HTML. Create a separate file styles.css to
house your styles. Link it to your HTML using the <link> tag:
<link rel="stylesheet" href="styles.css">
Now you need a way to specify which elements you want to style. That's where selectors come into play.
Selectors
CSS selectors are patterns used to select the elements you want to style.
Basics
Element Selector
Selects all instances of a specific HTML element. To style all paragraphs:
p {
color: white;
background: blue;
}
ID Selector
Selects a single element with a specific ID. Use a hash (#) before the ID name. An ID is unique, while a class is not. To avoid conflicts, IDs are rarely used.
#app {}
Class Selector
Selects elements with a specific class. Use a period . before the class name.
Class selectors are the most used ones.
.my-class {}
Attribute Selector
Selects elements based on their attributes. To select all text inputs:
input[type="text"] {}
Pseudo-class Selector
Pseudo-class selectors start with :.
state
:hover, :focus, :disabled, :focus-within
To select anchors with the mouse cursor over them:
a:hover {}
lang
:lang
<p lang="de"></p>
p:lang(de) {}
first and last
First child:
article *:first-child {}
First child, if it is a paragraph:
article p:first-child {}
First paragraph:
article p:first-of-type {}
Similarly, there are last-child and last-of-type.
nth
p:nth-child(n) {}
p:nth-of-type(2n + 1) {}
p:nth-last-of-type(1) {}
n: 0 1 2 3 ...
2n + 1: 1 3 5 ...
not
p:not(:first-child) {}
Pseudo Elements
Pseudo-element selectors start with a double colon ::
p::first-letter {}
p::first-line {}
p::before {}
p::after {}
input::placeholder {}
dialog::backdrop {}
::selection {}
Combining Selectors
Selectors can also be combined in several ways:
Grouping Selectors
You can group multiple selectors that share the same styles by separating them with a comma.
h1, h2, h3 {
color: green;
}
Descendant Selector
This selector targets elements that are nested within a specified parent element.
div p {
color: blue; /* Applies to all <p> elements inside any <div> */
}
Child Selector
The child selector > selects elements that are direct children of a specified parent.
ul > li {
list-style-type: square; /* Applies only to <li> that are direct children of <ul> */
}
Adjacent Sibling Selector
The adjacent sibling selector + selects an element that is immediately following another specified element.
h1 + p {
margin-top: 0; /* Applies to the first <p> that comes directly after an <h1> */
}
Combining Class and Element Selectors
You can combine class selectors with element selectors to target specific elements with certain classes.
button.primary {
background-color: blue;
}
The Box Model
The CSS box model describes the structure of elements. Each box consists of:
- Margin: Space outside the border that separates the element from others.
- Border: A line surrounding the padding.
- Padding: Space between the content and the border.
- Content: The actual content of the box (text, images).
+----------------------+
| Margin |
| +----------------+ |
| | Border | |
| | +------------+ | |
| | | Padding | | |
| | | +--------+ | | |
| | | |Content | | | |
| | | +--------+ | | |
| | +------------+ | |
| +----------------+ |
+----------------------+
.box {
width: 100px;
height: 100px;
padding: 10px;
border: 10px solid black;
margin: 10px;
}
What do the width and height mean in the above example?
box-sizing
The box-sizing property in CSS controls how the width and height of
an element are calculated. There are two values:
- content-box (default): Width and height only include the content, excluding padding and borders. This can lead to elements being larger than expected when adding padding or borders.
- border-box: Width and height include content, padding, and borders. This makes layout more predictable, as you specify the total size without worrying about extra space from padding or borders.
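The difference is easiest to see as arithmetic. The following sketch (a plain illustrative function, not a browser API) computes the total rendered width of the earlier .box example under each model:

```javascript
// Illustrative arithmetic: total rendered width of an element
// under each box-sizing model.
function renderedWidth(cssWidth, padding, border, boxSizing = "content-box") {
  if (boxSizing === "border-box") {
    // width already includes padding and border
    return cssWidth;
  }
  // content-box: padding and border are added on each side
  return cssWidth + 2 * padding + 2 * border;
}

// The .box example: width 100px, 10px padding, 10px border
console.log(renderedWidth(100, 10, 10));               // 140
console.log(renderedWidth(100, 10, 10, "border-box")); // 100
```

With content-box, the box occupies 40px more than its declared width, which is exactly the kind of surprise border-box avoids.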
Use the DevTools
Browsers come with built-in developer tools that help in debugging and optimizing websites. These tools can inspect HTML and CSS, analyze network requests, and test performance.
To work with the DOM or CSS, right-click an element on the page and select Inspect to jump into the Elements panel.
Shorthand Properties
CSS shorthand properties are a way to combine multiple CSS properties into a single declaration. This can make your CSS code more concise and easier to read.
Take margin as an example:
margin-top: 10px;
margin-right: 10px;
margin-bottom: 10px;
margin-left: 10px;
is the same as:
margin: 10px;
You can specify 1 to 4 values:
margin: [top and bottom] [right and left];
margin: [top] [right] [bottom] [left];
margin: [top] [right and left] [bottom];
There are many other shorthand properties available, such as padding, border, font, and background.
Positioning
CSS positioning allows you to control the layout of elements on the page.
- static: Default position; elements are positioned according to the normal flow.
- relative: Positioned relative to its normal position.
- absolute: Positioned relative to the nearest positioned ancestor.
- fixed: Positioned relative to the viewport; stays in place when scrolling.
- sticky: Toggles between relative and fixed, based on scroll position.
static
<div class="parent">
<div class="box1">Box 1</div>
<div class="box2">Box 2</div>
<div class="box3">Box 3</div>
</div>
Parent Box
+-----------------+
|+---------------+|
|| Box 1 ||
|+---------------+|
|+---------------+|
|| Box 2 ||
|+---------------+|
|+---------------+|
|| Box 3 ||
|+---------------+|
+-----------------+
absolute
<div class="parent">
<div class="box1">Box 1</div>
<div class="box2">Box 2</div>
<div class="box3">Box 3</div>
</div>
.parent {
position: relative;
height: 100px;
}
.box1 {
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
}
.box2 {
position: absolute;
bottom: 0;
right: 0;
}
Parent Box
+-----------------+
|+---------------+|
|| Box 3 ||
|+---------------+|
| |
| +----------+ |
| | Box 1 | |
| +----------+ |
| |
| +----------+|
| | Box 2 ||
| +----------+|
+-----------------+
Parent Box is the containing element. It establishes the context for absolutely positioned elements.
Box 1 and Box 2 are positioned absolutely within the parent box.
The top-left corner of Box 1 is first positioned at the center of its Parent Box, then translate shifts it back by half of its own width and height, effectively centering it around that point.
Box 3 is the only static element; since absolute elements are taken out of the normal document flow, it moves to the top.
Flexbox
Flexbox is a layout model that allows you to design a one-dimensional layout easily.
<div class="container">
<div class="item">1</div>
<div class="item">2</div>
<div class="item">3</div>
</div>
+-----------------------------+
|1 |
+-----------------------------+
|2 |
+-----------------------------+
|3 |
+-----------------------------+
Flexbox lays out items horizontally by default; use flex-direction: column; to lay them out vertically.
.container {
display: flex;
height: 5em;
}
+---------+---------+---------+
|1 |2 |3 |
+---------+---------+---------+
| |
| |
+-----------------------------+
justify-content controls how the children are distributed along the main axis (horizontal):
.container {
display: flex;
height: 5em;
justify-content: flex-end;
}
+----------+------+------+-----+
| |1 |2 |3 |
| +------+------+-----+
| |
| |
+------------------------------+
.container {
display: flex;
height: 5em;
justify-content: space-between;
}
+------+----+------+----+------+
|1 | |2 | |3 |
+------+ +------+ +------+
| |
| |
+------------------------------+
align-items controls how the children are distributed along the cross axis (vertical):
.container {
display: flex;
height: 5em;
justify-content: space-between;
align-items: center;
}
+------------------------------+
| |
+------+ +------+ +------+
|1 | |2 | |3 |
+------+ +------+ +------+
| |
+------------------------------+
To explore the full power of flexbox, check out Flex Cheatsheet.
There is also a game to test your flexbox skills.
Grid Layout
CSS Grid Layout is a powerful two-dimensional layout system.
Like Flexbox, it consists of two parts: the container and the items.
You define a grid template on the container, then place the items onto the grid.
Defining the container
Given following HTML structure:
<div class="container">
<div class="item1">1</div>
<div class="item2">2</div>
<div class="item3">3</div>
</div>
You can define the container's grid template with grid-template-rows and grid-template-columns, or use grid-template to combine the two rules and be more concise.
.container {
display: grid;
grid-template-columns: 1fr 2fr;
}
grid-template-columns defines the width of each column. fr stands for fraction, other units like px and em can also be used.
+-----------+------------------+
|1 | 2 |
+-----------+------------------+
|3 | |
+-----------+------------------+
Placing the items
You can use grid-column-start, grid-column-end, or grid-column to define the position and size of the item in the grid.
.container {
display: grid;
grid-template-columns: 1fr 2fr;
}
.item1 {
grid-column-start: 2;
}
+---------+-------------------+
| |1 |
+---------+-------------------+
|2 |3 |
+---------+-------------------+
.item1 {
grid-column-end: 3;
}
.item1 {
grid-column-end: span 2;
}
These two declarations do the same thing:
+-----------------------------+
|1 |
+---------+-------------------+
|2 |3 |
+---------+-------------------+
ASCII-art style grid template
grid-template-areas allows you to define the template in an ASCII-art style.
.container {
grid-template-areas:
"a a a"
"b . c";
}
.item1 {
grid-area: c;
}
.item2 {
grid-area: b;
}
.item3 {
grid-area: a;
}
+----------------------------+
|3 |
+---------+--------+---------+
|2 | |1 |
+---------+--------+---------+
Alignment
Align columns
justify-content aligns columns horizontally, similar to justify-content in a horizontal flexbox layout:
.container {
display: grid;
grid-template: repeat(2, 100px) / repeat(3, 100px);
justify-content: space-between;
}
+----+------+----+-------+----+
|1 | |2 | |3 |
+----+ +----+ +----+
|4 | |5 | |
+----+------+----+------------+
Align rows
align-content aligns rows vertically, similar to align-content in a multi-row horizontal flexbox layout:
.container {
display: grid;
grid-template-columns: repeat(3, 100px);
height: 200px;
align-content: center;
}
+-----------------------------+
+-----+-----+-----+ |
|1 |2 |3 | |
+-----+-----+-----+ |
|4 |5 | |
+-----+-----+ |
+-----------------------------+
Align inside grid cells
Given a 2x2 grid:
.container {
display: grid;
grid-template: repeat(2, 1fr) / repeat(2, 1fr);
}
justify-items and justify-self align the content horizontally; justify-items is applied on the container, justify-self on the children:
.item {
justify-self: end;
}
+-------+-----+-------+-----+
| |1 | |2 |
| +-+-----+ +-----+
| |3 | |
+-----+-------+-------------+
Similarly, align-items and align-self align the content vertically:
.container {
display: grid;
grid-template: repeat(2, 100px) / repeat(2, 1fr);
align-items: end;
}
+---------------------------+
| +-------------+
+-------------+ +
|1 |2 |
+-------------+-------------+
| |
+-------------+ |
|3 | |
+-------------+-------------+
Tips to remember these rules
There are a total of 6 rules (2 prefixes multiplied by 3 suffixes):
- justify-*: Used to align horizontally (main axis).
- align-*: Used to align vertically (cross axis).
- *-content: Applies to rows or columns.
- *-items: Applies to items within their cells.
- *-self: Applies to a single item within its cell.
More
Play a game to master grid layout.
There is also a Grid Cheatsheet.
Transition
A transition is usually triggered by user interaction like hover or focus; it can also be triggered programmatically using JavaScript by adding or removing classes.
A transition allows you to change property values smoothly over a specified duration.
button {
background-color: gray;
transition: background-color 0.3s ease-in-out;
}
button:hover {
background-color: red;
}
Transition multiple property values:
button {
background-color: gray;
transition: all 0.3s ease-in-out;
}
button:hover {
background-color: red;
transform: scale(1.5);
}
Animation
CSS animation allows you to create more complex sequences of transitions by defining keyframes. It can run continuously or a set number of times.
@keyframes beat {
0% { transform: rotate(-45deg) scale(1); }
50% { transform: rotate(-45deg) scale(1.5); }
100% { transform: rotate(-45deg) scale(1); }
}
.heart {
animation: beat 1s linear infinite;
}
.heart {
width: 50px;
height: 50px;
background-color: red;
transform: rotate(-45deg);
position: relative; /* anchors the absolutely positioned pseudo-elements */
margin: 50px;
}
.heart::before, .heart::after {
content: '';
position: absolute;
width: 50px;
height: 50px;
border-radius: 50%;
background-color: red;
}
.heart::before { left: 25px; }
.heart::after { top: -25px; }
Variables
CSS variables or custom properties are property names prefixed with
--, their values can be used in other declarations using the var()
function. CSS variables are scoped to the element they are declared
on.
Define variables on the :root pseudo-class so that they can be referenced globally:
:root {
--primary-color: #0000FF;
--secondary-color: #DDDDDD;
}
Access variables with var():
body {
background-color: var(--primary-color);
}
button {
background-color: var(--secondary-color);
}
var() also accepts a default value, in case the variable is not defined:
button {
background-color: var(--secondary-color, blue);
}
Responsive Design
Responsive design in CSS is a technique for ensuring that websites look and function optimally on various devices, from desktop computers to smartphones. This can be achieved by using flexbox, grid layout, relative CSS units, and CSS media queries.
CSS Media Queries
CSS media queries are conditions that can be applied to different screen sizes, orientations, and resolutions.
Use media queries to apply different styles based on screen size:
@media (max-width: 600px) {
.container {
flex-direction: column; /* Stacks items vertically on small screens */
}
}
CSS Units
Use relative units like em or rem for font sizes and spacing. This
allows elements to scale proportionally with the screen size.
Example:
h1 {
font-size: 2em;
}
There are two main categories of CSS units.
Relative Units
These units are relative to the size of the parent element or the viewport. This makes them responsive and adaptable to different screen sizes. Examples of relative units include:
rem: Relative to the root element's font size.
em: Relative to the font size of the parent element.
vw: Viewport width (1vw = 1% of the viewport width).
vh: Viewport height (1vh = 1% of the viewport height).
vmin: The smaller of vw and vh.
vmax: The larger of vw and vh.
Absolute Units
These units are fixed and do not change based on the size of the parent element or the viewport. Examples of absolute units include:
px: Pixels.
pt: Points (1pt = 1/72 of an inch).
in: Inches.
cm: Centimeters.
mm: Millimeters.
clamp
clamp(min, preferred, max)
The preferred value often uses a relative unit like vw, %, em, etc. clamp() ensures the size stays within min and max.
.container {
width: clamp(300px, 50vw, 800px);
}
In this case, the width of the .container will be 50% of the viewport width if that falls within the bounds (300px to 800px); otherwise it will be clamped to 300px or 800px.
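The resolution rule can be sketched in a few lines of JavaScript (an illustration of the math, not how the browser implements it):

```javascript
// How clamp(min, preferred, max) resolves: the preferred value wins
// only while it stays between min and max.
function cssClamp(min, preferred, max) {
  return Math.min(Math.max(preferred, min), max);
}

// width: clamp(300px, 50vw, 800px) at three viewport widths:
console.log(cssClamp(300, 0.5 * 500, 800));  // 300  (50vw = 250 falls below the min)
console.log(cssClamp(300, 0.5 * 1000, 800)); // 500  (50vw fits within the bounds)
console.log(cssClamp(300, 0.5 * 2000, 800)); // 800  (50vw = 1000 exceeds the max)
```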
Real World CSS
In real-world applications, you usually can't just write plain CSS. There are several challenges:
- Class name conflicts
- Browser compatibility (you can't always use the latest features)
- Code reuse
Besides those, there are also efforts to provide a better language than CSS, or a higher-level abstraction.
CSS Modules
CSS Modules solve the scoping problem by rewriting your class-name selectors to avoid conflicts:
import styles from './style.css';
function component() {
return <div className={styles.myClass}></div>
}
PostCSS
PostCSS supports variables and mixins, transpiles future CSS syntax, inlines images, and more.
Alternative CSS languages
Less, Scss, Sass
They extend CSS with features like variables, mixins, and functions.
CSS-in-JS
Write CSS within JavaScript, leveraging JavaScript's programmatic capabilities.
Solutions that require a runtime: styled-components, Emotion.
Solutions with no runtime overhead: Vanilla Extract.
Frameworks
Bootstrap: provides pre-built components and styles.
Tailwind: offers extensive pre-built utility classes.
JavaScript
JavaScript carries a lot of historical burdens due to browsers maintaining backward compatibility, which prevents the removal of old features. Meanwhile, new features continue to be added, as it is the most widely used programming language on earth. As a result, the language can feel chaotic.
Don't be afraid. JavaScript is a reasonable language if you only use the good parts.
Where to Run JavaScript
One option is to use a Browser Console. In Chrome, you can open the
JavaScript console by pressing Cmd + Opt + J on Mac or Ctrl + Shift + J on Windows.
Another option is to use the Node.js REPL. Install Node.js using the following command:
brew install node
Then start the REPL:
$ node
Welcome to Node.js v20.6.1.
Type ".help" for more information.
>
To exit, press Ctrl + D or Ctrl + C.
Basic Syntax
Variable
const variables can't be reassigned, which makes code easier to read.
let allows for reassignment. There is also var; don't use it.
let name = "Alice"
const age = 7
Array
const fruits = ["apple", "banana", "cherry"]
fruits
.filter(x => x.length < 6)
.map(x => x.toUpperCase()) // ["APPLE"]
fruits.slice(0, 2) // ["apple", "banana"]
Object
const obj = {name: "Alice", age: 7}
Object.fromEntries([["name", "Alice"], ["age", 7]])
Object.entries(obj)
Function
function greet(name) {
return `Hello, ${name}!`
}
Calling a function
greet("JS")
greet.call(null, "JS")
greet.apply(null, ["JS"])
The first argument to call and apply is the this context:
function greet() {
return `Hello, ${this.name}!`
}
greet.call({name: "JS"})
this can also be specified this way:
function greet() {
return `Hello, ${this.name}!`
}
const alice = {name: "Alice", greet}
alice.greet()
Arrow Function
The most significant difference between arrow functions and
traditional function is their lexical this binding. Unlike
traditional functions, arrow functions do not create their own this
context. Instead, they inherit the this value from the enclosing
scope. This is particularly useful in scenarios like callbacks.
const greet = (name) => {
return `Hello, ${name}!`
}
Arrow functions with a single expression implicitly return the value of that expression:
const greet = name => `Hello, ${name}!`
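The lexical this binding is easiest to see with a callback. In this sketch, the arrow function passed to setTimeout inherits this from the enclosing method, so it can still reach the object; a traditional function there would get its own this and lose the context (the counter object is an illustrative example):

```javascript
const counter = {
  count: 0,
  incrementLater() {
    setTimeout(() => {
      // `this` is `counter`, inherited lexically from incrementLater.
      this.count += 1
    }, 10)
  }
}

counter.incrementLater()
```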
Class
class Person {
constructor(name, age) {
this.name = name
this.age = age
}
greet() {
console.log(`Hello, my name is ${this.name}`)
}
}
const person = new Person("Alice", 7)
person.greet()
Destructuring
const person = { name: "Alice", age: 30 }
const { name, age } = person
const [first, ...rest] = [1, 2, 3]
Spread
const arr1 = [1, 2, 3]
const arr = [...arr1, 4, 5]
const dict1 = { key: "val" }
const dict = { ...dict1, key2: "val2" }
Asynchronous Programming
Asynchronous programming in JavaScript is typically done using callbacks, event pub/sub, coroutines, streams, and Promises with async/await.
In modern JavaScript, Promises and async/await are preferred for their improved readability and maintainability.
Creating a Promise:
function delay(milliseconds) {
return new Promise((resolve, reject) => setTimeout(resolve, milliseconds))
}
Consume a promise:
delay(1000).then(result => {
console.log('one second passed')
}).catch(error => {
console.error(error)
})
Or use async/await:
async function myFunction() {
await delay(1000)
await op1()
await delay(1000)
await op2()
}
This is way better than callbacks:
function myFunction(callback) {
setTimeout(() => {
op1().then(() => {
setTimeout(() => {
op2().then(() => callback()).catch(callback)
}, 1000)
}).catch(callback)
}, 1000)
}
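When the awaited operations are independent of one another, they don't have to run sequentially; Promise.all starts them together and waits for all of them. A minimal sketch (runBoth is an illustrative name):

```javascript
function delay(milliseconds) {
  return new Promise(resolve => setTimeout(resolve, milliseconds))
}

async function runBoth() {
  const start = Date.now()
  // Both delays start immediately, so the total is about 1 second, not 2.
  const [a, b] = await Promise.all([
    delay(1000).then(() => 'first'),
    delay(1000).then(() => 'second'),
  ])
  return { a, b, elapsed: Date.now() - start }
}
```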
Module
Modules enable you to break your code into pieces, making it easier to maintain and scale applications. They are essential components of a programming language. In JavaScript, the situation used to be more complicated. Fortunately, it has now settled down to ES Modules (ESM).
ES Modules
ESM is pretty straightforward:
myModule.js
export const pi = 3.14159
export function greet(name) {
console.log(`Hello, ${name}!`)
}
main.js
import { pi, greet } from './myModule.js'
console.log(pi)
greet("Alice")
When used in browser:
<script type="module" src="main.js"></script>
CommonJS
CommonJS is used in Node.js. Here is how CommonJS works:
myModule.js
const pi = 3.14159
function greet(name) {
  console.log(`Hello, ${name}!`)
}
module.exports = { pi, greet }
main.js
const { pi, greet } = require('./myModule')
The differences between CommonJS and ES Modules extend beyond syntax. CommonJS loads modules synchronously, which complicates browser support, and it allows for dynamic require, making tree-shaking and static analysis more challenging.
If you're starting a new project, it's advisable to use ES Modules instead.
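ES Modules do have a dynamic loading mechanism as well: the import() function, which returns a Promise, so modules can be loaded on demand while remaining statically analyzable at the top level. A small sketch, assuming a Node.js environment (loadPlatformName is an illustrative name):

```javascript
async function loadPlatformName() {
  // import() resolves the module asynchronously at runtime.
  const os = await import('node:os')
  return os.platform()
}
```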
Bundling
While most modern browsers and runtimes support ES Modules, you might need to use a build tool like Webpack, esbuild, or Rollup to bundle your modules for older browsers.
They are pretty straightforward to get started with; take esbuild as an example:
esbuild app.js --bundle --outfile=dist.js
Building processes are typically handled by frameworks or project scaffolding, so you don't usually need to worry about them.
Git
Git, initially developed by Linus Torvalds for Linux development, is a distributed version control system that tracks file changes over time. It is essential for developers to collaborate effectively and maintain a history of their work.
Repository, Commit, and Remote
- Use git init to create a new repository, which adds a .git folder to the current directory.
- For existing repositories, clone with git clone git@github.com:<namespace>/<project>.git.
- Run git status to check the status of files in the repository:
  - Untracked: New files not added yet.
  - Modified: Tracked files that have been altered but not staged.
  - Added: Files staged for commit.
  - Ignored: Files specified in .gitignore that won't show up in git status.
- Use git add <file> to stage a specific file for commit, or git add . to stage all modified and untracked files.
- Create a commit with git commit -m 'message', which cleans the workspace by committing staged changes.
- Add a remote repository with git remote add origin git@github.com:<namespace>/<project>.git, designating it as "origin."
- Push commits to the "main" branch with git push -u origin main, setting "origin" as the default remote.
- Use git pull to fetch and merge changes from a remote repository into the current branch.
It Is a Tree
When you run git init, you create the initial node, or root of the tree.
- Use git checkout -b my-branch to create and switch to a new branch named "my-branch."
- Each commit, executed with git commit -m 'message', adds a new node to that branch.
- Branches can be created from existing branches.
- Switch between branches with git checkout <branch>.
Merging
Typically, you develop code on a private branch. Once complete, push it to the remote and start a "Pull Request" to request merging into the main branch. Resolve any conflicts beforehand.
Conventional Commits
Conventional Commits is a standardized format for commit messages that helps convey the intent and impact of changes. This consistency aids developers in understanding commits during reviews or while analyzing project history.
<type>(<scope>): <subject>
<body>
Examples:
git commit -m \
"feat(api): send an email to the customer when a product is shipped"
Commonly used types:
- feat: A new feature
- fix: A bug fix
- refactor: A change that neither adds a feature nor fixes a bug
- test: Adding or refactoring tests
- docs: Documentation changes
- chore: Other updates (e.g., tooling, configuration)
Git Is a Powerful Tool
Here is more to explore:
- Aliases
- Rebasing
- Tags
- Stashing
- Cherry-picking
- Workflows
- Handling large files
- Integration with IDEs
Semantic Versioning
Semantic Versioning is a specification for version numbering. It uses
a format of <MAJOR>.<MINOR>.<PATCH>:
- MAJOR: Indicates a breaking change to the API.
- MINOR: Indicates a new feature added to the API that is backward-compatible.
- PATCH: Indicates a bug fix or other backward-compatible changes.
Examples:
- 1.0.0: Initial stable release.
- 1.0.1: A patch release to fix bugs or make minor improvements.
- 1.1.0: A minor update with new features.
- 2.0.0: A major update with breaking changes.
SemVer provides a clear and consistent way to communicate the significance of changes to users and developers.
Many build and release tools can automatically determine the next version number based on commit messages (like feat or fix) that follow the Conventional Commits format.
But ultimately, semantic versioning is just a statement of the developer's intent. It cannot guarantee that a patch release will be completely free of breaking changes. That's the reason we lock our dependencies.
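In npm's package.json, that intent shows up as version ranges (the package names below are just examples):

```json
{
  "dependencies": {
    "left-pad": "1.3.0",
    "express": "^4.18.0",
    "lodash": "~4.17.21"
  }
}
```

Here 1.3.0 pins an exact version, ^4.18.0 accepts compatible minor and patch updates (anything below 5.0.0), and ~4.17.21 accepts only patch updates. A lockfile (package-lock.json) then records the exact versions that were actually resolved.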
Frontend Development
Frontend development focuses on creating the visible and interactive aspects of web applications—essentially everything users see and engage with directly.
Key Components of Frontend Development:
- HTML (Hypertext Markup Language): Structures the content of a webpage, defining elements such as headings, paragraphs, images, and links.
- CSS (Cascading Style Sheets): Styles HTML elements to control layout, colors, fonts, and overall visual presentation.
- JavaScript: Introduces interactivity and dynamic content, enabling features like animations, form validation, and real-time updates.
- Frontend Frameworks and Libraries: Tools like React, Angular, and Vue.js enhance development efficiency by offering pre-built components and state management solutions.
Responsibilities of Frontend Developers:
- User Experience (UX): Designing intuitive and engaging interfaces that cater to user needs.
- User Interface (UI): Creating visually appealing and cohesive layouts.
- Responsiveness: Ensuring compatibility across various devices and screen sizes.
- Accessibility: Making content usable for individuals with disabilities.
- Performance Optimization: Enhancing load times and overall site efficiency.
Web APIs
Web APIs are a set of built-in objects and functions that allow JavaScript to interact with the browser environment. These APIs provide functionality beyond the core JavaScript language itself, enabling developers to create dynamic, interactive web applications.
Server-side runtimes like Deno also favor Web APIs over proprietary APIs. This brings multiple benefits: code is easier to share between client and server, and browser development experience translates directly to the server.
fetch
One example is the fetch API, which is used to make HTTP requests. It is implemented as specified in the WHATWG Fetch spec.
It can be used in multiple modern JavaScript runtimes.
async function demo() {
const response = await fetch('https://httpbin.org/get')
console.log(response.ok)
console.log(await response.json())
}
To make a POST request:
async function demo() {
const response = await fetch('https://httpbin.org/post', {
method: 'POST',
headers: {
'content-type': 'application/json'
},
body: JSON.stringify({payload: 1})
})
console.log(response.ok)
console.log(await response.json())
}
More
A comprehensive list of Web APIs can be found on MDN.
DOM API
The Document Object Model (DOM) is a crucial component of the Web API. It represents an HTML document as a tree structure of objects, providing a programmatic interface to interact with the document's content, structure, and style.
The DOM is composed of nodes, which can be elements (like <div>, <p>,
etc.), text nodes, attributes, comments, and more.
Nodes are organized in a hierarchical structure, with the document element at the root.
Nodes have properties and methods that allow you to access and manipulate their content, attributes, and relationships with other nodes.
The DOM also enables event handling, allowing you to respond to user interactions (like clicks, key presses) and other events.
Selecting Elements
Remember CSS selectors?
document.querySelectorAll('.my-class');
const element = document.querySelector('#myid');
Modifying Elements
element.innerHTML = "New content";
element.style.color = "red";
element.classList.add('active');
Creating Elements
const el = document.createElement('p');
el.textContent = "This is a paragraph";
document.body.appendChild(el);
Event Handling
el.addEventListener('click', () => {
console.log('Element clicked');
});
MDN
Detailed documentation can be found on MDN.
Canvas
The Canvas API is also a part of the Web APIs. It allows for dynamic, scriptable rendering of 2D shapes and bitmap images.
It provides a drawing surface where you can create graphics on the fly using JavaScript. With the Canvas API, you can draw shapes, text, and images, as well as manipulate pixels directly. This makes it ideal for applications like games, animations, and data visualizations.
The API is accessible through the <canvas> HTML element:
<!DOCTYPE html>
<html>
<head>
<title>Canvas</title>
</head>
<body>
<canvas id="canvas" width="300" height="250"></canvas>
<script src="script.js"></script>
</body>
</html>
In script.js:
const canvas = document.getElementById('canvas')
const ctx = canvas.getContext('2d')
ctx.fillStyle = 'blue'
ctx.fillRect(50, 50, 150, 100)
Canvas vs SVG
SVG (Scalable Vector Graphics) represents graphics as DOM elements. This allows you to leverage standard HTML event handling, making it easier to manage user interactions like clicks, hovers, and other events.
On the other hand, Canvas is a bitmap-based approach where you draw pixels directly, meaning you need to handle interaction and collision detection manually. While libraries like PixiJS can streamline this process, it often requires more setup compared to the straightforward integration of SVG with the HTML structure.
If interactivity and ease of integration are your primary concerns, SVG might be the better choice. For performance-intensive applications, or games, Canvas could be advantageous.
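To illustrate the interactivity difference: an SVG shape is a DOM element, so it takes standard event listeners directly, with no manual hit testing (a minimal sketch):

```html
<svg width="200" height="100">
  <rect id="box" x="20" y="20" width="100" height="60" fill="blue" />
</svg>
<script>
  // Ordinary DOM event handling works on SVG elements.
  document.getElementById('box').addEventListener('click', () => {
    console.log('rect clicked')
  })
</script>
```

With Canvas, you would instead compare the click coordinates against the rectangle you drew.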
Node.js
Node.js is a server-side JavaScript runtime based on Chrome's V8 engine, renowned for its efficiency and performance in handling I/O-bound operations.
Utilizing JavaScript on both the client and server sides offers advantages such as code and library reuse, as well as reduced context switching for developers.
Even if you are not using it for backend development, you still need it for frontend tooling and package management.
Install Node.js
Use brew:
brew install node
If you want to use multiple versions of Node.js, install nvm (Node Version Manager), then use nvm to install and manage different versions.
npm
npm (Node.js package manager) is a tool for installing and updating third-party packages, and for publishing your own.
It consists of a registry and a command-line tool, which is typically installed alongside the Node.js binary.
npm uses a configuration file called package.json, which keeps track of your project's dependencies.
npm can also function as a task runner, with tasks defined in the
package.json file for automating various workflows.
Many laughed at npm, but they clearly haven't tried package managers from other languages.
Basic npm commands:
npm init
npm install --save <pkg>
npm Alternatives
npm is known for its slowness and high disk space usage. If that's a problem for you, try yarn or pnpm.
How Non-Blocking I/O Works
Node.js can handle multiple I/O operations, such as reading an HTTP request or executing a database query, concurrently without blocking program execution. Unlike many other languages and runtimes, a single Node.js process can manage many concurrent requests, making it a great choice for web applications.
When you initiate an I/O operation, Node.js does not wait for it to complete. Instead, it lets it run in the background and proceeds to execute other code. Once the I/O operation is complete, a callback function (or promise resolution) is invoked, allowing you to handle the result.
Underneath, Node.js leverages libuv, a cross-platform abstraction layer for asynchronous I/O, to handle these operations efficiently and in a non-blocking manner.
Alternatives
Node.js has been a dominant player in the JavaScript runtime landscape for years, but with the emergence of Deno and Bun, it’s facing some fresh competition.
Deno
Deno offers a more modern take: written in Rust, with built-in TypeScript support, Web-standard APIs, and a secure environment by default. Created by the original author of Node.js, it's like a Node.js 2.0 that was never adopted by the community.
Deno 2.0 is on the way, addressing the compatibility issue with Node.js. A hard lesson learned.
Bun
Bun is a server-side JavaScript runtime built on JavaScriptCore and written in Zig. It's heavily optimized for speed and performance.
Bun has also made a significant investment in developer experience: it comes with a comprehensive set of built-in tools and features, including its own package manager, bundler, test runner, SQLite support, and more. Unlike Deno, it can be used as a drop-in replacement for Node.js.
Node.js is Catching Up
Node.js has been evolving much faster recently; features like TypeScript support, built-in SQLite, watch mode, and a test runner are being borrowed from other runtimes.
TypeScript
TypeScript is a superset of JavaScript that introduces static typing. Its growing popularity stems from the added type safety and improved tooling, which many developers find invaluable for managing large codebases.
It is reassuring when you make numerous adjustments in a sizable codebase and all type checks pass, confirming that you haven't unintentionally broken anything.
However, some developers criticize TypeScript for its initial complexity. The type system is quite sophisticated, which can be daunting for newcomers or those from different programming backgrounds. Features such as generics, type inference, and union/intersection types may feel overwhelming at first, but they offer substantial advantages once mastered.
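As a small, hypothetical example of the safety on offer: the compiler rejects any call that doesn't match a declared shape, before the code ever runs.

```typescript
// A declared shape for the data we expect.
interface User {
  name: string
  age: number
}

function formatUser(user: User): string {
  return `${user.name} (${user.age})`
}

const alice: User = { name: 'Alice', age: 7 }

// formatUser({ name: 'Bob' })  // compile-time error: property 'age' is missing
```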
Running TypeScript
TypeScript has become increasingly integrated into various tools and runtimes, making it more accessible for developers.
Deno and Bun support TypeScript natively, allowing you to run TypeScript files directly without additional setup. This is a significant advantage for rapid development and prototyping.
The latest version of Node.js has introduced the
--experimental-strip-types flag, which simplifies running TypeScript
files by stripping type annotations. However, this is mainly for
execution purposes, while TypeScript's static type checking still
requires the TypeScript compiler.
Using the TypeScript Compiler
To get started with TypeScript, install it and add it to package.json's devDependencies:
npm install --save-dev typescript
Next, add a script in your package.json:
"scripts": {
"tsc": "tsc -p ."
}
Edit your tsconfig.json to configure the compiler options:
{
"compilerOptions": {
"target": "es6", // Specify ECMAScript target version
"module": "commonjs", // Specify module code generation
"strict": true, // Enable all strict type-checking options
"esModuleInterop": true // Enable interoperability between CommonJS and ES Modules
},
"include": [
"src/**/*.ts" // Specify which files to compile
]
}
Compile and perform type checking by running:
npm run tsc
Learning TypeScript may take some time, but it's well worth the investment. For comprehensive resources, visit the TypeScript documentation.
Web Components
Componentization is a fundamental principle in software development that involves breaking down a complex system into smaller, self-contained units called components. These components are designed to perform specific tasks, interact with each other in a well-defined manner, and can be reused in different contexts.
Component-based UI development has become the predominant approach in modern web application development.
Web Components are a set of standards that enable you to create reusable custom elements. They consist of 3 main technologies:
- Custom Elements: Define your own HTML tags.
- Shadow DOM: Encapsulate styles and markup, preventing style leakage.
- HTML Templates: Define chunks of HTML that can be reused.
Defining a Custom Element
To create a custom element, you can extend the built-in HTMLElement class.
class MyElement extends HTMLElement {
constructor() {
super();
const shadow = this.attachShadow({ mode: 'open' }); // Create a shadow root.
const wrapper = document.createElement('div');
wrapper.textContent = 'Hello, Web Component!';
shadow.appendChild(wrapper);
}
}
customElements.define('my-element', MyElement); // the hyphen is mandatory.
Using the Custom Element
You can now use your custom element in HTML like any other element.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Web Components Demo</title>
<script src="path/to/your-component.js" defer></script>
</head>
<body>
<my-element></my-element>
</body>
</html>
Encapsulating Styles with Shadow DOM
The Shadow DOM allows you to style your custom elements without affecting the rest of the document.
class MyElement extends HTMLElement {
constructor() {
super();
const shadow = this.attachShadow({ mode: 'open' });
const style = document.createElement('style');
style.textContent = `
div {
color: white;
background-color: blue;
padding: 10px;
border-radius: 5px;
}
`;
const wrapper = document.createElement('div');
wrapper.textContent = 'Styled Web Component!';
shadow.appendChild(style);
shadow.appendChild(wrapper);
}
}
customElements.define('my-element', MyElement);
Using HTML Templates
You can define templates in your custom elements for reusable structures.
const template = document.createElement('template');
template.innerHTML = `
<style>
div {
color: white;
background-color: green;
padding: 10px;
border-radius: 5px;
}
</style>
<div>Template-based Web Component!</div>
`;
class MyElement extends HTMLElement {
constructor() {
super();
const shadow = this.attachShadow({ mode: 'open' });
shadow.appendChild(template.content.cloneNode(true));
}
}
customElements.define('my-element', MyElement);
Why Web Components
Web Components provide a way to create reusable, encapsulated custom elements in your web applications. While many other frameworks and libraries offer similar capabilities—often with more elegant syntax or enhanced functionality—there's a significant advantage to using Web Components. Some of those frameworks may become obsolete in a few years, but Web Components are built on web standards, ensuring they are future-proof.
One of the most powerful aspects of Web Components is their ability to coexist with other frameworks and libraries. You can use them as a lower-level technique, allowing you to leverage the strengths of Web Components while still utilizing your preferred framework for higher-level application logic and UI management.
React
While Web Components are future-proof, React is currently the most popular JavaScript library for building dynamic and interactive user interfaces.
React primarily does two things: it maps JavaScript state or data to the DOM and efficiently manages component rendering and updates.
ui = f(state)
Basics
Here's a minimal example:
import React from 'react' // why is this necessary?
import ReactDOM from 'react-dom/client'
function HelloWorld() {
return <div>Hello, world!</div>
}
const root = ReactDOM.createRoot(document.getElementById('root'))
root.render(<HelloWorld />)
How can you mix HTML in JavaScript? That's actually JSX, it is not part of the JavaScript standard and requires transpilation before the browser can parse and execute it.
During transpilation, JSX is converted into React.createElement() calls, like this:
import React from 'react'
import ReactDOM from 'react-dom/client'
function HelloWorld() {
return React.createElement('div', null, 'Hello, world!')
}
const root = ReactDOM.createRoot(document.getElementById('root'))
root.render(React.createElement(HelloWorld))
JSX is optional, it's just syntactic sugar. If you're not using JavaScript or TypeScript, you might be utilizing other templating languages, such as Hiccup for ClojureScript.
Scaffolding
Setting up a modern frontend project can be time-consuming. You need to install various tools and write several configuration files for linting, compiling, bundling, and setting up a development server. Tools like Vite relieve you of this burden.
To create a new project with Vite, execute the following command:
npm create vite@latest
This will generate the following files (depending on your choices):
├── README.md
├── eslint.config.js
├── index.html
├── package.json
├── public
│ └── vite.svg
├── src
│ ├── App.css
│ ├── App.tsx
│ ├── assets
│ │ └── react.svg
│ ├── index.css
│ ├── main.tsx
│ └── vite-env.d.ts
├── tsconfig.app.json
├── tsconfig.json
├── tsconfig.node.json
└── vite.config.ts
Vite is a modern frontend build tool renowned for its speed. It uses esbuild for transpilation and Rollup for bundling.
React Alternatives
Compared to other frameworks, React is lightweight. Unlike Vue or Svelte, React is not a programming language, and unlike Angular, it is not a full-fledged framework. Additionally, React is recognized for its stability.
Hooks
Since everyone has overwhelmingly embraced hooks, we won't bother with class components.
Why are hooks so popular? They're more intuitive, more concise, and easier to reuse.
Most frequently used built-in hooks are:
- useState: Used to manage state within a functional component.
- useEffect: Used to perform side effects, such as fetching data or adding event listeners.
- useRef: Used to create and manage mutable references.
- useCallback: Used to memoize functions, preventing unnecessary re-renders.
- useMemo: Used to memoize calculations, preventing unnecessary recalculations.
The Dependency Array
Perhaps the most important feature of hooks is the dependency array. The hook is rerun whenever its dependencies change.
import React from 'react'
import { useState, useEffect } from 'react'
function MyComponent() {
const [count, setCount] = useState(0)
useEffect(() => {
console.log(`count: ${count}`)
}, [count]) // the dependency array
return <div onClick={() => setCount(count + 1)}>Hooks!</div>
}
Custom Hooks
You can write your own hooks. It's always tempting to do data fetching in a component, so we will write a usePromise hook to help us fetch data.
function usePromise(fn, deps = []) {
  const [state, setState] = useState({pending: true})
  useEffect(() => {
    fn().then(result => setState({result}), error => setState({error}))
  }, deps) // why not pass fn as a dependency?
  return state
}
To use usePromise:
function MyComponent(props) {
const {error, pending, result} = usePromise(() => fetch(props.url).then(x => x.text()), [])
if (error) {
return <div>error occurred</div>
}
if (pending) {
return <div>loading</div>
}
return <div>{result}</div>
}
What about adding a retry button?
Pitfalls
- Always pass the dependency array, even if it's empty.
- Pass the right dependencies.
- All useX() hooks must be called unconditionally; they must not be wrapped in if statements.
State in React
State is a JavaScript object that stores data which can change over time. This data is used to render the user interface (UI) and update it in response to user interactions or other events like timers or network requests.
useState
Managing state within a single component is straightforward using the useState hook:
import React, { useState } from 'react';
const Counter = () => {
const [count, setCount] = useState(0);
return (
<div>
<p>Count: {count}</p>
<button onClick={() => setCount(count + 1)}>Increment</button>
</div>
);
};
Immutability
React requires state to be immutable. This means you should not modify the state object directly. If you do, React might not correctly detect changes and update the DOM.
The following example will not work as expected:
import React, { useState } from 'react';
const Counter = () => {
const [state, setState] = useState({ count: 1 });
const increment = () => {
state.count += 1;
setState(state);
};
return (
<div>
<p>Count: {state.count}</p>
<button onClick={increment}>Increment</button>
</div>
);
};
To fix it, you need to create a new state object:
const increment = () => {
setState({ ...state, count: state.count + 1 });
};
This requirement aligns well with functional programming principles, making languages like ClojureScript, which enforce immutability, a good fit for React development.
Because JavaScript does not have built-in immutable data structures, managing nested state can be cumbersome:
setState({
...oldState,
foo: {
...oldState.foo,
count: oldState.foo.count + 1
}
});
Libraries like Immer address this by allowing you to treat state as mutable within a specific scope, simplifying updates:
oldState.foo.count += 1;
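Without a library, the nested copy can also be wrapped in a small generic helper. The updateIn function below is a hypothetical utility sketch, not a library API:

```javascript
// Return a new object with the value at `path` updated by `fn`,
// copying only the objects along the path (hypothetical helper).
function updateIn(obj, path, fn) {
  if (path.length === 0) return fn(obj)
  const [key, ...rest] = path
  return { ...obj, [key]: updateIn(obj[key], rest, fn) }
}

const oldState = { foo: { count: 1 }, bar: 'untouched' }
const newState = updateIn(oldState, ['foo', 'count'], n => n + 1)
// newState.foo.count is 2; oldState is left unchanged
```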
Sharing State Between Components
The useState hook is confined to a single component. To share state between components, you typically pass it down as props. This can become complex with multiple nested components. We will explore more efficient solutions in later sections.
Advanced State Management
Context API
Context is React's own state management API. It provides a way to create global state that can be accessed by any component within the context provider.
const MyContext = React.createContext();
const MyProvider = ({ children }) => {
const [value, setValue] = React.useState("some state");
return (
<MyContext.Provider value={{ value, setValue }}>
{children}
</MyContext.Provider>
);
};
const MyComponent = () => {
const { value, setValue } = React.useContext(MyContext);
return <div>{value}</div>;
};
// Usage
<MyProvider>
<MyComponent />
</MyProvider>
Jotai
Jotai allows you to share state by reference: if you have the reference, you have the state. Very intuitive.
import { atom } from 'jotai'
import { useAtomValue, useSetAtom } from 'jotai'
const countAtom = atom(0)
const Counter = () => {
const count = useAtomValue(countAtom)
return (
<div>{count}</div>
)
}
const IncCounter = () => {
const setCount = useSetAtom(countAtom)
return (
<button onClick={() => {setCount(count => count + 1)}}>
Inc
</button>
)
}
const CountApp = () => {
return (
<>
<Counter />
<IncCounter />
</>
)
}
Redux
Redux is one of the oldest state management solutions for React. It's still widely used, especially in larger-scale applications.
digraph Redux {
rankdir=LR;
node [
shape=rectangle,
color="#eeeeee",
style=filled,
fontcolor=black,
fontsize=14,
fontname="Monospace"];
Store [label="Store" fillcolor="lightyellow"];
Actions [label="Actions" fillcolor="lightyellow"];
Reducers [label="Reducers" fillcolor="lightyellow"];
View [label="View" fillcolor="lightyellow"];
View -> Actions [label="dispatch" color="#555555"];
Actions -> Reducers [label="sent to" color="#555555"];
Reducers -> Store [label="returns new state" color="#555555"];
Store -> View [label="provides state" color="#555555"];
}
Reducers and actions in Redux promote a clear separation of concerns. Actions define the "what" of state changes, while reducers handle the "how," making the flow predictable and easier to debug.
This structure improves maintainability, facilitates testing, and allows for easier scaling of applications. Ultimately, it leads to a more organized codebase, making it easier to manage complex state interactions. But it might feel overengineered for small projects.
import { connect } from 'react-redux';
const Counter = ({ count, increment }) => (
<div>
<p>{count}</p>
<button onClick={increment}>Increment</button>
</div>
);
const mapStateToProps = (state) => ({
count: state.counter,
});
const mapDispatchToProps = (dispatch) => ({
increment: () => dispatch({ type: 'INCREMENT' }),
});
export default connect(mapStateToProps, mapDispatchToProps)(Counter);
Action Middleware
Action middleware in Redux is a way to extend the store's capabilities by intercepting actions before they reach the reducer. This allows you to perform side effects, such as asynchronous operations or logging, without cluttering your components or reducers. Common examples include Redux Thunk for handling asynchronous actions and Redux Saga for managing complex side effects. Middleware enhances flexibility and keeps the codebase clean and organized.
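As a sketch, here is a minimal logging middleware in Redux's curried store => next => action shape, exercised with a stand-in store rather than a real one (fakeStore and passThrough are illustrative):

```javascript
// A minimal logging middleware: it logs the action, passes it along
// the chain, then logs the resulting state.
const logger = store => next => action => {
  console.log('dispatching', action.type)
  const result = next(action)        // hand the action to the next middleware/reducer
  console.log('state after', store.getState())
  return result
}

// Exercise it with stand-ins for the store and the `next` function.
const fakeStore = { getState: () => ({ counter: 1 }) }
const passThrough = action => action
const dispatched = logger(fakeStore)(passThrough)({ type: 'INCREMENT' })
```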
Time Travel
One advantage of global state management over solutions like Jotai is that it allows for comprehensive inspection and manipulation of the entire application state and its history. This capability can greatly enhance debugging, testing, and overall comprehension of the application.
Redux Time Travel is a tool for Redux developers that enables exploration of the history of Redux applications, providing valuable insights into their behavior.
Single-Page Applications (SPAs)
A Single-Page Application (SPA) is a web application that loads a single HTML page and dynamically updates the content without reloading the entire page. This approach provides a more fluid and responsive user experience, similar to native apps. SPAs are a dominant paradigm in modern web development.
The main drawbacks of SPAs include increased complexity, slower initial load times, and issues with SEO.
How SPAs Work
At the core of a SPA is client-side routing. When a user navigates to a different page within the application, the browser updates the URL in the address bar, while the content is dynamically rendered and updated using JavaScript. This is facilitated by the browser's History API.
Before the History API, developers commonly used the hash fragment (#) in URLs for client-side routing.
What happens when a user directly requests a page, say /orders? In
this case, the server sends back a basic HTML skeleton. The browser
then downloads and executes JavaScript, which allows the client-side
router to load the appropriate component and render the content.
Using Frameworks
Most SPAs utilize a variety of tools, including a router (e.g., React Router), data-fetching abstraction (e.g., React Query), global state management (e.g., Redux), and a build pipeline/dev server (e.g., Webpack). This can be overwhelming for novice developers. Frameworks like Next.js or Remix can streamline this process for you.
React Router
React Router is a popular routing library for React single-page applications (SPAs).
The v7 release has merged features from Remix, introducing incrementally adoptable enhancements like code splitting, data loading, actions, server rendering, static pre-rendering, pending states, optimistic UI, and React Server Components (RSC).
npx create-react-router@latest my-app
cd my-app
You will see the following project structure:
├── README.md
├── app
│ ├── app.css
│ ├── root.tsx
│ ├── routes
│ │ └── home.tsx
│ └── routes.ts
├── package-lock.json
├── package.json
├── postcss.config.js
├── public
│ ├── favicon.ico
│ ├── logo-dark.svg
│ └── logo-light.svg
├── tailwind.config.ts
├── tsconfig.json
└── vite.config.ts
Install dependencies and start the development server:
npm i
npm run dev
Routing
Routing rules are defined in app/routes.ts:
import {
type RouteConfig,
route,
index,
} from "@react-router/dev/routes"
export default [
index("routes/home.tsx"),
route("about", "routes/about.tsx"),
] satisfies RouteConfig;
It's pretty straightforward:
- index(file) defines the default page.
- route(path, file) maps a path to a .tsx file.
Arguably we could just use route('', 'routes/home.tsx') for the index, making it even simpler.
Nested Routes
Nested routes are defined with child routes rendered through <Outlet /> in the parent route:
route("dashboard", "dashboard.tsx", [
index("home.tsx"),
route("settings", "settings.tsx"),
])
In dashboard.tsx:
import { Outlet } from "react-router"
export default function Dashboard() {
return (
<div>
<h1>Dashboard</h1>
<Outlet />
</div>
)
}
Layouts
Layouts are similar to nested routes, except they have nothing to do with the URL. You can also group multiple routing rules under a layout to share the same parent template.
layout("./auth/layout.tsx", [
route("login", "./auth/login.tsx"),
route("register", "./auth/register.tsx"),
])
Prefixes
The prefix helper prepends a common path segment to a group of routes without introducing a parent component:
[
...prefix("projects", [
index("./projects/home.tsx"),
route(":pid", "./projects/project.tsx"),
route(":pid/edit", "./projects/edit-project.tsx"),
]),
]
It's just syntactic sugar for:
[
route("projects", "./projects/home.tsx"),
route("projects/:pid", "./projects/project.tsx"),
route("projects/:pid/edit", "./projects/edit-project.tsx"),
]
I'm not a fan of the spread (...) here; ideally the router would handle it.
Type-safe Parameters
In the above example, there is a :pid in the path:
route("projects/:pid", "./projects/project.tsx")
The parameters can be accessed in the component as follows:
import type { Route } from "./+types/project"
export async function loader({ params }: Route.LoaderArgs) {
console.log(params.pid)
}
export default function Component({ params }: Route.ComponentProps) {
return <div>{params.pid}</div>
}
To make the import statement work, run npx react-router typegen,
which generates type definitions in
.react-router/types/app/routes/+types/project.d.ts.
Linking
Edit routes/home.tsx:
import type { MetaFunction } from "react-router"
import { Link } from "react-router"
export const meta: MetaFunction = () => {
return [
{ title: "New React Router App" },
{ name: "description", content: "Welcome to React Router!" },
]
}
export default function Index() {
return <Link to="/projects/99">Project 99</Link>
}
Visit http://localhost:5173 and click the link to navigate to /projects/99.
Server Actions
Server actions are functions named action.
An action function runs on the server and is removed from client bundles.
import type { Route } from "./+types/home"
import { Form } from "react-router"
export async function action({ request }: Route.ActionArgs) {
const formData = await request.formData()
const input = formData.get("input") as string
return {
output: input?.toUpperCase()
}
}
export default function Page({ actionData }: Route.ComponentProps) {
return <Form method="post">
<input type="text" name="input" />
<button type="submit">Run on Serverside</button>
{actionData && <p>{actionData.output}</p>}
</Form>
}
If you need more flexibility, there is the fetcher API:
import type { Route } from "./+types/home"
import { useFetcher } from "react-router"
import { useState } from "react"
export async function action({ request }: Route.ActionArgs) {
const formData = await request.formData()
const input = formData.get("input") as string
return {
output: input?.toUpperCase()
}
}
export default function Page() {
const fetcher = useFetcher()
const [val, setVal] = useState('')
const cb = () => {
fetcher.submit({ input: val }, { method: "post" })
}
return <div>
<input value={val} onChange={(e) => setVal(e.target.value)} />
<button disabled={fetcher.state !== 'idle'} onClick={cb}>Run on Server</button>
<div>{fetcher.data?.output}</div>
</div>
}
Why Server Actions
Server actions feel like a language-level feature, but they are implemented via code transformation and framework integration.
From the developer's perspective, they reduce complexity and provide a better developer experience.
Server-Side Rendering
Server-Side Rendering (SSR) is a web development technique where the initial HTML content of a web page is generated on the server, rather than entirely by client-side JavaScript. This results in a fully rendered page being sent to the browser, making it immediately visible to the user without the need for JavaScript to be loaded and executed.
SSR is crucial for SEO and provides a better user experience as it improves loading times.
SSR vs Traditional Server-Side HTML Rendering
The original approach to web development involved generating static HTML pages on the server and sending them directly to the client's browser. However, as web applications grew more complex and interactive, the limitations of traditional server-side rendering became evident. Generating entire pages on the server for every user interaction could be inefficient, particularly for dynamic content.
The rise of JavaScript frameworks like Angular, React, and Vue introduced a new paradigm called client-side rendering. These frameworks enable developers to build dynamic and interactive web applications using JavaScript. While client-side rendering offers several advantages, such as faster updates and a more responsive user experience, it also presents some drawbacks, including potential SEO issues and slower initial page loads.
SSR was introduced as a hybrid approach, combining the benefits of both traditional server-side rendering and client-side rendering. It involves generating the initial HTML structure on the server and subsequently allowing client-side JavaScript to handle dynamic updates.
Technical Breakdown
SSR is more complex than traditional server-side HTML rendering.
Server-Side Rendering:
- The server receives a request for a web page.
- The server processes the request, fetches data, and renders components to HTML.
- The server generates the complete HTML markup for the page, including the initial state of any dynamic components.
- The server sends the rendered HTML to the client's browser.
Client-Side Hydration:
- The browser receives the fully rendered HTML and parses it.
- The browser executes any embedded JavaScript code.
- The client-side JavaScript framework takes over and "hydrates" the rendered HTML, connecting the JavaScript components to the DOM elements.
- The application can then handle user interactions and update the DOM dynamically.
Frameworks
SSR (Server-Side Rendering) frameworks are primarily written in JavaScript, as they need to render client-side components on the server.
Some of the most popular options:
- Next.js (React)
- Remix (React)
- Astro (framework agnostic)
- Nuxt (Vue)
- SvelteKit (Svelte)
Next.js
Next.js is a popular web framework built on React, known for its server-side rendering (SSR) support and file-based routing. It provides an excellent developer experience by automatically configuring the necessary tools for React and TypeScript, making it particularly user-friendly for beginners.
Initialize a New Project
To create a new Next.js project, run:
npx create-next-app@latest
It will generate the following files:
├── README.md
├── next.config.mjs
├── package-lock.json
├── package.json
├── src
│ └── app
│ ├── favicon.ico
│ ├── fonts
│ │ ├── GeistMonoVF.woff
│ │ └── GeistVF.woff
│ ├── globals.css
│ ├── layout.tsx
│ ├── page.module.css
│ └── page.tsx
└── tsconfig.json
Start the dev server:
npm run dev
Routing
Next.js uses file-based routing. For example, the URL
http://localhost:3000/my-page corresponds to the file at
src/app/my-page/page.tsx.
If you access http://localhost:3000/my-page in the browser, you'll
see a 404 page. To create the page, add the following code to
src/app/my-page/page.tsx:
export default function MyPage() {
return <h1>My Page</h1>
}
Once saved, the page in the browser should update itself automatically.
Fetching Data
Now let's add a new page src/app/users/page.tsx, to render some dynamic data:
export default async function Users() {
const response = await fetch('https://dummyjson.com/users')
const { users }: {
users: Array<{
id: number,
firstName: string,
lastName: string,
}>
} = await response.json()
return (
<div>
{users.map((user) => (
<div key={user.id}>
<div>
{user.firstName} {user.lastName}
</div>
</div>
))}
</div>
)
}
Visit http://localhost:3000/users to see the rendered page.
Dynamic Routing
To create a user detail page, first, add a link in users/page.tsx:
import Link from 'next/link'
{users.map((user) => (
<div key={user.id}>
<div>
<Link href={`/users/${user.id}`}>{user.firstName} {user.lastName}</Link>
</div>
</div>
))}
Clicking one of the links will lead to a URL like http://localhost:3000/users/26, which initially shows a 404 page.
To handle this route, create a new file at src/app/users/[id]/page.tsx with the following content:
export default async function User({ params }: { params: { id: string } }) {
const response = await fetch(`https://dummyjson.com/users/${params.id}`)
const user: {
firstName: string,
lastName: string,
email: string,
} = await response.json()
return (
<div>
<h1>{user.firstName} {user.lastName}</h1>
<div>{user.email}</div>
</div>
)
}
Once saved, the page in the browser should update itself and show the user's email.
Server Actions
Server actions are similar to RPCs: the client can invoke server-side functions as if they were client-side. Behind the scenes, it still makes an HTTP request. The compiler and the framework do a lot of work here, such as code splitting and transformation, so you don't need to manually create routes and call the APIs.
Here's how it looks. Create a new page at src/app/products/page.tsx with the following content:
export default function Products() {
async function addProduct(data: FormData) {
"use server"
console.log(data.get('name'))
}
return (
<form action={addProduct}>
<div>
<label>
Name: <input name="name" />
</label>
</div>
<button type="submit">Submit</button>
</form>
)
}
Visit http://localhost:3000/products, fill out the form, and submit. In the network panel, you’ll see a POST request to the current path. Additionally, the server-side log will print the name, confirming that the function executed on the server.
useActionState
useActionState allows you to access the result of a form action.
It is a client-only hook, so you have to put "use client" at the top of your source file.
"use client"
import { useActionState } from 'react'
import { addProduct } from './action'
export default function Products() {
const [state, submitAction, isPending] = useActionState(addProduct, {message: ''})
return (
<form action={submitAction}>
<div>
<label>Name: <input name="name" /></label>
</div>
<button disabled={isPending} type="submit">Create</button>
<div>{state.message}</div>
</form>
)
}
And the action code has to be moved to a file with "use server":
"use server"
export async function addProduct(prevState: {message: string}, data: FormData) {
return { message: `${data.get('name')} saved` }
}
Astro
Most JavaScript frameworks render entire websites as large JavaScript applications. This approach, while simple, can lead to performance issues. Astro offers a solution with its Island Architecture.
Astro builds websites by breaking them into independent, encapsulated "islands" of functionality. Each island can be a single component, a group of components, or an entire page. Astro uses partial hydration, loading only the necessary JavaScript for each interactive island, resulting in faster page loads.
Getting Started with Astro
To create a new Astro project:
npm create astro@latest
This generates the following project structure:
├── README.md
├── astro.config.mjs
├── package-lock.json
├── package.json
├── public
│ └── favicon.svg
├── src
│ ├── components
│ │ └── Card.astro
│ ├── env.d.ts
│ ├── layouts
│ │ └── Layout.astro
│ └── pages
│ └── index.astro
└── tsconfig.json
Templates
Astro templates use the .astro file extension. Install the Astro
extension for Visual Studio Code for enhanced language support.
Astro templates consist of two parts:
- Variables: Defined between --- delimiters.
- HTML Markup: The rest is HTML markup, including <style> and <script> tags.
---
const items = ["Dog", "Cat", "Platypus"];
---
<ul>
{items.map((item) => (<li>{item}</li>))}
</ul>
Islands
Components are rendered as static HTML by default, no JavaScript is
sent to the client unless explicitly requested. To make a component
interactive, use the client directive:
<MyReactComponent client:load />
There are more options for the client directive:
- client:idle: loads and hydrates the component when the browser becomes idle; it has a lower priority than client:load.
- client:visible: loads and hydrates the component when it enters the viewport.
Routing
Astro uses file-based routing, similar to Next.js:
/ -> src/pages/index.astro
/about -> src/pages/about.astro
Dynamic Routing
/product/1 -> src/pages/product/[product].astro
/product/2 -> src/pages/product/[product].astro
To use dynamic routing, set the output option in astro.config.mjs to 'server' or 'hybrid':
export default defineConfig({
output: 'server',
})
In src/pages/product/[product].astro, parameters can be accessed via Astro.params:
---
const { product } = Astro.params
---
<div>{ product }</div>
Visiting http://localhost:4321/product/99 should give you "99".
APIs
Requests can also be mapped to a .ts or .js file, in this case the file acts as an API controller.
For example:
/resource -> src/pages/resource.ts
export async function GET({params, request}) {
return new Response(JSON.stringify({ok: true}))
}
Accessing http://localhost:4321/resource should return {"ok":true}.
Markdown Support
Another interesting feature of Astro is its built-in Markdown support, which makes it ideal for a lightweight CMS.
For example:
/about -> src/pages/about.md
WASM
WebAssembly, or WASM for short, is a low-level binary format initially designed to run efficiently in modern web browsers. It allows desktop applications written in C/C++ to be ported to the browser. Thanks to WASM, you can now run databases in the browser, such as DuckDB.
In addition to performance, security is another key advantage of WASM. It enables the execution of untrusted code in web applications. For example, quickjs-emscripten allows for the safe execution of untrusted JavaScript.
Unlike Java Applets and Flash, WebAssembly is an open standard not owned by any single company. This openness makes WASM an ideal compilation target, leading to the development of various WASM runtimes, including those for server-side applications.
As more runtimes have been developed, a standard was needed to define how WASM programs interact with the outside world. This is addressed by the WebAssembly System Interface (WASI).
While WASM was initially conceived to improve the performance of front-end JavaScript applications, it is now being used outside of web browsers in areas such as server-side applications and desktop software, emerging as a universal distribution format.
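To make this concrete, here is a tiny hand-assembled WASM module, instantiated from raw bytes; it exports a single add function. Real modules are compiled from languages like C or Rust rather than written byte-by-byte, but this shows that WASM is just a compact binary format any runtime can load.

```javascript
// A minimal WASM module exporting add(a, b) -> a + b, encoded by hand.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic number: "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary format version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00, // function section: one function of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, // code section
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // local.get 0; local.get 1; i32.add; end
])

// The same API works in browsers and in Node.js.
const mod = new WebAssembly.Module(bytes)
const instance = new WebAssembly.Instance(mod)
console.log(instance.exports.add(2, 3)) // 5
```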
Error Tracking
Error tracking involves capturing and analyzing application errors, including error messages, stack traces, and contextual data.
Unlike backend applications, where logs can be accessed on the server, frontend logs must be deliberately captured and sent to a service.
Error tracking tools help collect errors, identify common error patterns, and monitor the progress of fixes.
Typically, an SDK is provided to collect logs, and a web UI is available to visualize trends and view error details.
Tools like Sentry.io and highlight.io also offer advanced features like session replay.
Firebase
Often referred to as Backend as a Service (BaaS), Firebase minimizes the work involved in managing user accounts and databases, allowing developers to focus on building the frontend (web and mobile) without the burden of setting up and maintaining backend infrastructure.
It started as a real-time database service, then was bought by Google in 2014. Over time, Firebase expanded its offerings to include authentication, cloud storage, cloud functions, and more, becoming a comprehensive mobile/web app development platform.
Get Started
Go to the Firebase console and create a project.
It will generate the setup code for you depending on your platform; for the web it looks like the following:
<script type="module">
import { initializeApp }
from "https://www.gstatic.com/firebasejs/10.13.2/firebase-app.js";
const firebaseConfig = {
apiKey: "VJ4zTzfNe1v1kz9xeg_WjCI6yWSu8-uudLPwKD1",
authDomain: "project-id.firebaseapp.com",
projectId: "project-id",
storageBucket: "project-id.appspot.com",
messagingSenderId: "12345678901",
appId: "1:12345678901:web:3pvzJkqknm8x3cubfxcrrb"
};
const app = initializeApp(firebaseConfig);
</script>
Authentication
Firebase supports multiple sign-in methods, including email/password and various OAuth providers like Google, Facebook, and Twitter. To use a particular method, you need to enable it in the Firebase console first.
I prefer OAuth over email/password sign-in. You don't have to manage user accounts, it's also more secure, and it offers a better user experience. Let's take Google as an example:
import { getAuth, GoogleAuthProvider, signInWithPopup }
from 'https://www.gstatic.com/firebasejs/10.13.2/firebase-auth.js'
const auth = getAuth(app);
const provider = new GoogleAuthProvider();
provider.addScope('https://www.googleapis.com/auth/contacts.readonly');
signInWithPopup(auth, provider)
.then(result => {
const user = result.user;
}).catch(err => {
alert("Sign-in not successful.")
});
Database
At its core is Cloud Firestore, a document database. Like MongoDB, it's schemaless, very flexible, and easy to get started with.
import { getFirestore, collection, getDocs }
from 'https://www.gstatic.com/firebasejs/10.13.2/firebase-firestore.js'
const db = getFirestore(app);
async function getItems(db) {
const snapshot = await getDocs(collection(db, 'items'));
return snapshot.docs.map(doc => doc.data());
}
Security Considerations
Have you noticed that Firebase's apiKey is in your code? Normally,
we don't put keys in our code, especially in client-side code. But for
Firebase, it's inevitable.
This looks dangerous, but that doesn’t mean all your data is visible to an attacker. They still need to log in to access the data.
Firebase has built-in security rules that help protect your data. You should always define those rules to ensure that only authenticated users can access or modify your data as needed.
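For example, rules like the following sketch restrict access to signed-in users (the items collection name is carried over from the query above; adjust the match to your own data model):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /items/{item} {
      // Only authenticated users may read or write items.
      allow read, write: if request.auth != null;
    }
  }
}
```

Real applications usually go further, e.g. comparing request.auth.uid against an owner field on the document.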
Supabase
Supabase is an open-source alternative to Firebase, built on PostgreSQL.
Backend Development
Backend development serves as the foundation for modern web applications, facilitating the creation of dynamic and interactive user experiences.
It focuses on server-side logic, databases, and application programming interfaces (APIs), managing data processing, storage and retrieval to ensure the frontend receives the necessary information.
Server-side applications are typically hosted on physical or virtual machines, often running Linux, alongside databases that store and manage the application's data.
Common programming languages for backend development include JavaScript, PHP, Go, Java, Python, and Ruby. Popular frameworks include Express, Laravel, Django, Spring, and Ruby on Rails.
Request Handling
Backend development is primarily focused on handling requests from clients (e.g., web browsers, mobile apps) and processing them to generate appropriate responses. Some server-side applications generate HTML pages for these requests, while others offer APIs (Application Programming Interfaces) that provide data in formats like JSON.
+-----------+ +------------+ +-----------+ +----------+
| User | | Browser | | Server | | Database |
+----+------+ +-----+------+ +-----+-----+ +-----+----+
| | | |
| Click Button | | |
|-------------------> | | |
| | Send Request | |
| |-------------------->| |
| | | |
| | | Query Data |
| | |------------------->|
| | | |
| | |<-------------------|
| | | Return Data |
| | | |
| | Return API Data | |
| |<--------------------| |
| Update UI | | |
|<--------------------| | |
HTTP
HTTP (Hypertext Transfer Protocol) is the foundation of the World Wide Web. It's a standardized way for clients (like web browsers) to communicate with servers to request and receive data, primarily in the form of web pages.
An HTTP Request
Clients send requests to servers, specifying the resource they want to access (e.g., a webpage, image, or video). Servers respond to requests by sending the requested resource or an error message if the request cannot be fulfilled.
Let's initiate a request using curl:
curl -v https://httpbin.org/get
With -v for verbose output, we'll see the following:
> GET /get HTTP/2
> Host: httpbin.org
> User-Agent: curl/8.4.0
> Accept: */*
>
< HTTP/2 200
< date: Mon, 23 Sep 2024 08:27:20 GMT
< content-type: application/json
< content-length: 253
< server: gunicorn/19.9.0
< access-control-allow-origin: *
< access-control-allow-credentials: true
<
{
"args": {},
"headers": {
"Accept": "*/*",
"Host": "httpbin.org",
"User-Agent": "curl/8.4.0",
"X-Amzn-Trace-Id": "Root=1-66f32668-47a5d54b0c3385670c2ebbb0"
},
"origin": "31.282.0.91",
"url": "https://httpbin.org/get"
}
The section prefixed with > shows the request headers, while the
section prefixed with < displays the response headers. The final
part is the response body. Since this is a GET request, there is
typically no request body.
The presence of HTTP/2 indicates that this request uses HTTP version 2, which is a binary protocol. The output you see here has been decoded by curl for readability.
HTTP Methods
HTTP defines several methods for different types of requests, such as:
- GET: Retrieves a resource.
- POST: Sends data to a server to be processed.
- PUT: Updates a resource.
- DELETE: Deletes a resource.
- PATCH: Partially updates a resource.
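As a quick illustration, the Fetch API (available in browsers and in Node.js 18+) lets you construct requests with any of these methods; the endpoint below is hypothetical:

```javascript
const base = 'https://api.example.com/tasks' // hypothetical endpoint

// POST: create a resource, sending a JSON body.
const create = new Request(base, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ title: 'Buy groceries' }),
})

// DELETE: remove resource number 1.
const remove = new Request(`${base}/1`, { method: 'DELETE' })

console.log(create.method, remove.method) // POST DELETE
```

Passing either object to fetch() would send the request; here we only build them to show how the method is part of every request.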
Status Codes
Servers indicate the outcome of a request using status codes, such as 200 (OK), 404 (Not Found), or 500 (Internal Server Error). Following is a list of frequently used status codes:
Successful Responses (200-299)
- 200 (OK)
- 201 (Created)
Redirects (300-399)
- 301 (Moved Permanently)
Client Errors (400-499)
- 400 (Bad Request) incorrect method or content type or invalid payloads
- 401 (Unauthorized) you need to log in
- 403 (Forbidden) you don't have permission
Server Errors (500 and above)
- 500 (Internal Server Error) unexpected error
- 502 (Bad Gateway) invalid response from an upstream server, timeout or crash
- 503 (Service Unavailable) server is down or unable to handle requests
HTTP Headers
HTTP requests and responses include headers providing additional information. For example, Content-Type: application/json indicates that the content is encoded as JSON.
Request Headers
- User-Agent: Identifies the client (browser, mobile device, bot, etc.).
- Accept: Specifies acceptable content types (e.g., HTML, JSON, XML).
- Referer: Indicates the referring URL.
Response Headers
- Content-Type: Specifies the MIME type of the response body.
- Content-Length: Indicates the length of the response body.
- Location: Specifies a redirect URL.
Caching Headers
Caching improves performance by storing frequently accessed data locally or on caching servers. HTTP headers control caching behavior.
Response Headers
- Cache-Control: Offers fine-grained caching control with directives like:
  - max-age: Specifies the maximum cache age (in seconds).
  - public: Cacheable by any cache.
  - private: Cacheable only by a private cache (the browser).
  - no-cache: Requires revalidation with the server before a cached copy is used.
  - no-store: Prevents storage in any cache.
- ETag: A unique resource identifier used with If-None-Match for conditional caching.
- Expires: Sets an absolute expiration date/time for cached resources (superseded by Cache-Control).
- Last-Modified: Indicates the date/time when a resource was last modified.
Request Headers
- If-Modified-Since: Specifies the last known modification time. If unchanged, the server returns a 304 Not Modified response.
- If-None-Match: Provides an ETag (Entity Tag) for resource identification. A matching ETag results in a 304 Not Modified response.
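A sketch of the server-side logic behind conditional requests (simplified; real servers also handle If-Modified-Since and weak ETags):

```javascript
// If the ETag the client sends in If-None-Match still matches the
// current resource, the server can answer 304 with no body.
function conditionalStatus(requestHeaders, currentEtag) {
  if (requestHeaders['if-none-match'] === currentEtag) {
    return 304 // unchanged: the client may reuse its cached copy
  }
  return 200 // changed (or no conditional header): send the full resource
}

console.log(conditionalStatus({ 'if-none-match': '"v1"' }, '"v1"')) // 304
console.log(conditionalStatus({ 'if-none-match': '"v1"' }, '"v2"')) // 200
```

The saving is in the body: a 304 response carries only headers, so unchanged resources cost almost no bandwidth.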
Authorization Headers
GET /protected-resource HTTP/1.1
Authorization: Bearer <token>
The Authorization header conveys client authentication credentials. The format depends on the authentication scheme:
- Basic Authentication: Base64-encoded username and password.
- Digest Authentication: A more secure alternative to Basic Authentication using cryptographic hashing.
- Bearer Token: A bearer token (often a JWT) is included as
Bearer <token>.
Cookie Headers
- Set-Cookie: The server sets a cookie on the client.
- Cookie: The client sends cookies to the server.
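A simplified sketch of parsing the Cookie request header into an object (real parsers also handle quoting and percent-encoding):

```javascript
// Cookie headers look like "name1=value1; name2=value2".
function parseCookies(header = '') {
  return Object.fromEntries(
    header
      .split(';')
      .filter(Boolean) // drop empty segments (e.g. from an empty header)
      .map((pair) => {
        const [name, ...rest] = pair.trim().split('=')
        return [name, rest.join('=')] // values may themselves contain '='
      })
  )
}

console.log(parseCookies('session=abc123; theme=dark'))
// { session: 'abc123', theme: 'dark' }
```

Frameworks like Express do this for you (via cookie-parsing middleware), but the header format itself is this simple.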
CORS Headers
CORS (Cross-Origin Resource Sharing) allows web pages to make requests to servers on different domains. This is crucial for modern web applications interacting with external APIs.
Request Headers
- Origin: Specifies the request's origin (protocol, domain, port).
- Access-Control-Request-Method: Specifies the HTTP method for the actual request (e.g., GET, POST).
- Access-Control-Request-Headers: Specifies custom headers for the actual request.
Response Headers
- Access-Control-Allow-Origin: Specifies the allowed origin (* for all, or a specific origin).
- Access-Control-Allow-Methods: Specifies allowed HTTP methods.
- Access-Control-Allow-Headers: Specifies allowed custom headers.
- Access-Control-Max-Age: Specifies the maximum preflight response cache age (in seconds).
- Access-Control-Expose-Headers: Specifies response headers accessible by JavaScript in the requesting origin.
Preflight Requests
Before cross-origin requests, browsers send a preflight OPTIONS
request to check server permissions. This request includes
Access-Control-Request-Method and Access-Control-Request-Headers.
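A sketch of how a server might answer a preflight: compute the response headers from the request's Origin and requested method, using a hypothetical allowlist:

```javascript
// Returns the CORS headers for a preflight, or null to reject it.
function preflightHeaders(origin, requestedMethod) {
  const allowedOrigins = ['https://app.example.com'] // assumed allowlist
  const allowedMethods = ['GET', 'POST', 'PUT', 'DELETE']
  if (!allowedOrigins.includes(origin)) return null // unknown origin
  if (!allowedMethods.includes(requestedMethod)) return null // forbidden method
  return {
    'Access-Control-Allow-Origin': origin,
    'Access-Control-Allow-Methods': allowedMethods.join(', '),
    'Access-Control-Allow-Headers': 'Content-Type',
    'Access-Control-Max-Age': '86400', // cache the preflight for a day
  }
}

const h = preflightHeaders('https://app.example.com', 'POST')
console.log(h['Access-Control-Allow-Origin'])
```

A real server would send these headers on a 204 response to the OPTIONS request; middleware like Express's cors package packages up exactly this logic.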
Custom Headers
Custom headers (e.g., x-request-id for request tracing) can carry application specific information.
Request Handling and Routing
Routing maps incoming HTTP requests to specific application functions or handlers, allowing different URLs to trigger different actions or return different content.
A Minimal Web Server
const http = require('http')
const server = http.createServer((req, res) => {
res.statusCode = 200
res.setHeader('Content-Type', 'text/plain')
res.end('Hello, world!\n')
})
server.listen(3000, '127.0.0.1', () => {
console.log('Server running at http://127.0.0.1:3000/')
})
Save as index.js and run it with node index.js. Visiting http://localhost:3000/anypage.html will return "Hello, world!".
Implementing Basic Routing
The below example adds more routes:
const http = require('http')
const server = http.createServer((req, res) => {
const url = new URL(req.url, `http://${req.headers.host}`)
if (url.pathname === '/') {
res.writeHead(200, { 'Content-Type': 'text/plain' })
res.end('Hello, world!')
} else if (url.pathname === '/about') {
res.writeHead(200, { 'Content-Type': 'text/plain' })
res.end('This is the about page.')
} else {
res.writeHead(404, { 'Content-Type': 'text/plain' })
res.end('Not found')
}
})
server.listen(3000, () => {
console.log('Server running at http://127.0.0.1:3000/')
})
This adds an /about page (http://localhost:3000/about) and handles 404 errors for incorrect URLs like http://localhost:3000/foo.
Using a Router
Web frameworks simplify route definition. Let's take Express.js as an example.
Install it with npm:
npm install --save express
Here's an Express.js equivalent:
const express = require('express')
const app = express()
app.get('/', (req, res) => {
res.send('Hello, world!')
})
app.get('/about', (req, res) => {
res.send('This is the about page.')
})
app.listen(3000, () => {
console.log('Server listening on port 3000')
})
This achieves roughly the same functionality as the previous examples but with a cleaner, more organized structure.
API Development
An API (Application Programming Interface) is a set of rules and protocols that allows different software applications to communicate with each other. It acts as a bridge between two systems, enabling them to exchange data and functionality.
In web development APIs commonly include:
- RESTful APIs: These use the Representational State Transfer (REST) architectural style, employing HTTP methods (GET, POST, PUT, DELETE) to interact with resources.
- RPCs (Remote Procedure Calls): RPC allows a client to invoke procedures on a remote server as if local.
- GraphQL APIs: GraphQL is a query language enabling clients to request precise data, minimizing over-fetching.
JSON
JSON (JavaScript Object Notation) is a lightweight text-based data-interchange format. It's human-readable and easily parsed by machines. While originating from JavaScript, its use extends far beyond.
It supports six data types: objects, arrays, strings, numbers, booleans, and null. Example:
{
"name": "John Doe",
"age": 30,
"hobbies": ["reading", "hiking"]
}
JavaScript provides JSON.parse() and JSON.stringify() to convert
between JSON strings and JavaScript objects.
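A quick round trip between the two, using the example object above:

```javascript
// A JSON document as a string, e.g. received from an API.
const json = '{"name":"John Doe","age":30,"hobbies":["reading","hiking"]}'

const obj = JSON.parse(json)   // JSON string -> JavaScript object
obj.hobbies.push('cooking')    // work with it as a normal object

const roundTripped = JSON.stringify(obj) // object -> JSON string
console.log(roundTripped)
```

Note that stringify drops anything JSON cannot represent (functions, undefined), so a round trip is lossless only for plain data.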
JSON Schema
JSON Schema (https://json-schema.org/) defines the structure and validation rules for JSON documents. It serves as documentation and aids in code generation (e.g., user interfaces).
Example:
{
"type": "object",
"properties": {
"firstName": {"type": "string"},
"lastName": {"type": "string"},
"age": {"type": "integer", "minimum": 0}
},
"required": ["firstName", "lastName"]
}
Validation
Ajv (https://ajv.js.org/) is a popular JSON schema validator.
const Ajv = require("ajv")
const data = { firstName: "foo" }
const ajv = new Ajv()
const validate = ajv.compile(schema) // Use schema from the example above
const valid = validate(data)
if (!valid) {
console.error(validate.errors)
}
JSON Schema Builders
JSON Schema can be verbose to write manually; schema builders can simplify the process. Here's how to write a schema using TypeBox:
import { Type, Static } from '@sinclair/typebox'
const T = Type.Object({
x: Type.Number(),
y: Type.Number(),
z: Type.Number()
})
This is equivalent to:
const T = {
type: 'object',
required: ['x', 'y', 'z'],
properties: {
x: { type: 'number' },
y: { type: 'number' },
z: { type: 'number' }
}
}
You also get the type for free:
type T = Static<typeof T>
// gives you:
type T = {
x: number,
y: number,
z: number
}
RESTful
RESTful API, or Representational State Transfer API, is a popular architectural style for building web services.
It is simple and easy to use compared to previous generations of web service architectures like SOAP.
RESTful API is resource-centric. All operations, including Create, Retrieve, Update, and Delete (CRUD), are performed on specific resources.
In HTTP, those operations are represented by HTTP methods:
- GET: Retrieves a resource.
- POST: Creates a new resource.
- PUT: Updates an existing resource.
- DELETE: Deletes a resource.
Hono
Hono🔥 is a small, simple, and ultrafast web framework built on Web Standards.
It was originally designed as a lightweight framework for building web applications on Cloudflare. Now it works on any JavaScript runtime: Cloudflare Workers, Fastly Compute, Deno, Bun, Vercel, AWS Lambda, Lambda@Edge, and Node.js.
API for a TODO Application
For simplicity, no database is used here.
(Running on Node.js requires the @hono/node-server adapter.)
import { Hono } from 'hono'
import { serve } from '@hono/node-server'

const app = new Hono()
const api = new Hono()

const tasks = [
  { id: 1, title: 'Buy groceries' },
]

// Retrieve all tasks
api.get('/tasks', (c) => c.json(tasks))

// Retrieve a specific task
api.get('/tasks/:id', (c) => {
  const task = tasks.find((t) => t.id === Number(c.req.param('id')))
  return task ? c.json(task) : c.notFound()
})

// Create a new task
api.post('/tasks', async (c) => {
  const task = await c.req.json()
  task.id = tasks.length + 1 // Assign a unique ID
  tasks.push(task)
  return c.json(task, 201)
})

// Update a task
api.put('/tasks/:id', async (c) => {
  const index = tasks.findIndex((t) => t.id === Number(c.req.param('id')))
  if (index === -1) {
    return c.notFound()
  }
  tasks[index] = await c.req.json()
  return c.body(null, 204)
})

// Delete a task
api.delete('/tasks/:id', (c) => {
  const index = tasks.findIndex((t) => t.id === Number(c.req.param('id')))
  if (index === -1) {
    return c.notFound()
  }
  tasks.splice(index, 1)
  return c.body(null, 204)
})

app.route('/api', api)

serve({ fetch: app.fetch, port: 3000 })
NestJS
NestJS is modeled after Spring Boot. It uses dependency injection, which is rare in the JavaScript world.
It feels a lot heavier compared to Next.js.
Create a new project
NestJS provides a command-line tool, which can be installed via npm:
npm install -g @nestjs/cli
Now create a new project:
nest new nest-app
cd nest-app
Start the app:
npm run start:dev
Visit http://localhost:3000/
The Structure
Let's take a look at the generated files:
├── README.md
├── nest-cli.json
├── package-lock.json
├── package.json
├── src
│   ├── app.controller.spec.ts (tests for the controller)
│   ├── app.controller.ts
│   ├── app.module.ts
│   ├── app.service.ts
│   └── main.ts
├── test
│   ├── app.e2e-spec.ts
│   └── jest-e2e.json
├── tsconfig.build.json
└── tsconfig.json
We can see three key components here, namely controller, service and module.
Controllers are responsible for handling incoming HTTP requests and
returning appropriate responses. They act as the entry point for
external interactions with the application. In app.controller.ts it
just calls the service.
import { Controller, Get } from '@nestjs/common';
import { AppService } from './app.service';
@Controller()
export class AppController {
constructor(private readonly appService: AppService) {}
@Get()
getHello(): string {
return this.appService.getHello();
}
}
Services encapsulate the business logic of an application. In
app.service.ts it returns a string; in the real world, services
handle complex operations, data manipulation, and interactions with
external resources. Putting application logic in services also enables
code reuse, since a service can be called from multiple controllers.
import { Injectable } from '@nestjs/common';
@Injectable()
export class AppService {
getHello(): string {
return 'Hello World!';
}
}
Modules are used to wire up the application. This feels a bit
overengineered. Take a look at the content of app.module.ts:
import { Module } from '@nestjs/common';
import { AppController } from './app.controller';
import { AppService } from './app.service';
@Module({
imports: [],
controllers: [AppController],
providers: [AppService],
})
export class AppModule {}
Add a module
The nest CLI also does scaffolding, since there is some boilerplate
code that needs to be written.
To add a new module:
nest generate module cats
nest generate controller cats
nest generate service cats
It adds a new folder:
└── src
└── cats
├── cats.controller.spec.ts
├── cats.controller.ts
├── cats.module.ts
├── cats.service.spec.ts
└── cats.service.ts
It also updates app.module.ts.
In the controller, the @Controller('cats') decorator makes it
accessible under the path /cats, but there is no handler yet.
Let's add a method with the @Get() decorator:
import { Controller, Get } from '@nestjs/common';
@Controller('cats')
export class CatsController {
@Get()
findAll() {
return 'This action returns all cats';
}
}
Now visiting http://localhost:3000/cats should return all cats.
That's it for the basics of NestJS; read the documentation for more.
GraphQL
GraphQL is a powerful query language for APIs that offers a flexible approach to fetching data from a server. Unlike RESTful APIs, which often have fixed endpoints, GraphQL enables clients to specify precisely what data they require and how it should be structured.
Here's a simple example of a GraphQL query:
query GetPosts {
posts {
id
title
content
}
}
In this case, the client receives only the data it requested.
GraphQL also allows for fetching data from multiple sources in a single request:
query GetPostsWithComments {
posts {
id
title
content
comments {
id
content
author
}
}
}
This decouples the client and server logic, enabling the server to focus on providing atomic data sources while allowing the client to compose them as needed.
Additionally, GraphQL addresses API compatibility issues by making the backend behave like a database, queried with GraphQL instead of SQL.
GraphQL Servers
Popular GraphQL servers include GraphQL.js, Apollo Server, and GraphQL Yoga.
GraphQL Clients
There are Apollo Client and graphql-hooks, but for simple queries fetch is good enough.
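For instance, plain fetch is enough to send a query. A minimal sketch (the endpoint URL in the usage comment is an assumption; point it at your own server):

```javascript
// POST the query as JSON to a GraphQL endpoint
async function graphqlQuery(url, query, variables = {}) {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables }),
  })
  const { data, errors } = await res.json()
  if (errors) throw new Error(errors[0].message)
  return data
}

// Example usage (assumes a GraphQL server is running locally):
// const data = await graphqlQuery('http://localhost:4000/graphql', '{ hello }')
```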
GraphQL Yoga
GraphQL Yoga is a GraphQL Server with great developer experience.
Setup
First, install the necessary dependencies:
bun add graphql graphql-yoga
Define Schema
Create a file named schema.ts with the following content to define your GraphQL schema:
import { createSchema } from 'graphql-yoga'
const typeDefinitions = /* GraphQL */ `
type Query {
hello: String!
}
`
const resolvers = {
Query: {
hello: () => 'Hello World!'
}
}
export const schema = createSchema({
resolvers: [resolvers],
typeDefs: [typeDefinitions]
})
Execute Query
Create a file named main.ts to run a sample query:
import { execute, parse } from 'graphql'
import { schema } from './schema'
const query = /* GraphQL */ `
query {
hello
}
`
console.log(await execute({
schema,
document: parse(query)
}))
Run the query:
bun main.ts
Expected output:
{
"data": {
"hello": "Hello World!"
}
}
Set Up GraphQL Server
Create a file named server.ts to set up a GraphQL server:
import { createServer } from 'node:http'
import { createYoga } from 'graphql-yoga'
import { schema } from './schema'
createServer(createYoga({ schema }))
.listen(4000, () => {
console.info('Server is running on http://localhost:4000/graphql')
})
Start the server:
bun server.ts
Test the GraphQL API
Open your browser and go to http://localhost:4000/graphql. Use the following query in the GraphiQL editor:
query {
hello
}
You should receive the following response:
{
"data": {
"hello": "Hello World!"
}
}
Send a Request via Curl
You can also test the GraphQL API using curl:
curl -X POST http://localhost:4000/graphql \
-H "Content-Type: application/json" \
-d '{"query": "query { hello }"}'
Expected output:
{"data":{"hello":"Hello World!"}}
More on Schema
It's also possible to create GraphQL schemas in TypeScript with Pothos, so that you don't have to write them separately.
import { createYoga } from 'graphql-yoga'
import { createServer } from 'node:http'
import SchemaBuilder from '@pothos/core'

const builder = new SchemaBuilder({})

builder.queryType({
  fields: (t) => ({
    hello: t.string({
      args: {
        name: t.arg.string(),
      },
      resolve: (parent, { name }) => `hello, ${name || 'World'}`,
    }),
  }),
})

const yoga = createYoga({
  schema: builder.toSchema(),
})

createServer(yoga).listen(4000)
RPC
RPC (Remote Procedure Call) is a programming paradigm that allows a client program to call a procedure (or function) on a remote server as if it were a local procedure. This simplifies distributed computing by abstracting away the underlying network communication.
RPC vs RESTful API
Compared to RESTful APIs, RPCs offer better documentation and efficiency due to their typed nature.
Besides unary RPCs, where the client sends a single request to the server and gets a single response back, unidirectional and bidirectional streaming are also supported, which are not included in typical RESTful APIs.
RPC implementations are more complex than RESTful APIs. However, this can be mitigated by toolchains. Typically, you define the API in a format like Protocol Buffers, and tools generate client and server code automatically.
While RPC requires more initial setup than RESTful APIs and has a steeper learning curve, it is worth the trouble for larger-scale projects.
gRPC
gRPC is an open-source RPC framework developed by Google. It uses Protocol Buffers for serializing structured data.
Example of an interface definition in Protocol Buffers:
// The greeter service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
It resembles typical type definitions in programming languages, except
using message for data classes, and rpc for functions.
Have you noticed the numbers? This is the tricky part of Protocol Buffers.
In Protocol Buffers, numbers are used to uniquely identify fields within a message definition. Each field is assigned a unique integer number. This number is used to encode and decode the field's value in the serialized message representation.
Why? It's more efficient for encoding and maximizes compatibility with different client versions.
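To make this concrete, here is a tiny JavaScript illustration of how a field's key is formed. Only the field number and wire type go on the wire, never the field name, which is why renaming a field is safe but renumbering it breaks compatibility:

```javascript
// Protocol Buffers encodes each field key as (field_number << 3) | wire_type
function fieldKey(fieldNumber, wireType) {
  return (fieldNumber << 3) | wireType
}

// `string name = 1` is length-delimited, which is wire type 2
console.log(fieldKey(1, 2)) // 10, a single byte on the wire
```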
tRPC
tRPC is a TypeScript RPC framework.
A minimal example
We will use bun for simplicity. Bun can be installed via brew install bun.
Install trpc:
bun add @trpc/server@next @trpc/client@next
Server
Add server.ts:
import { initTRPC } from '@trpc/server'
import { createHTTPServer } from '@trpc/server/adapters/standalone'
const { router, procedure } = initTRPC.create()
const appRouter = router({
userList: procedure.query(async () => {
return [{ name: "Alice" }, { name: "Bob" }]
}),
})
export type AppRouter = typeof appRouter
const server = createHTTPServer({
router: appRouter,
})
server.listen(3000)
Client
Add client.ts:
import { createTRPCClient, httpBatchLink } from '@trpc/client'
import type { AppRouter } from './server'
const trpc = createTRPCClient<AppRouter>({
links: [
httpBatchLink({
url: 'http://localhost:3000',
}),
],
})
const users = await trpc.userList.query()
console.log(users)
Run
Start the server:
bun server.ts
Run the client in another terminal:
bun client.ts
Why tRPC
tRPC offers exceptional developer experience (DX) with a fully typed API for clients. It is also easier to set up compared to gRPC. Overall, it's an ideal choice for a pure TypeScript stack.
Real-time Communication
Real-time communication in a browser involves establishing a persistent connection between a browser and a server, allowing for immediate, bidirectional exchange of data without the need for page refreshes. This is crucial for applications that require instant updates, such as online chat, collaborative editing, and online gaming.
WebSocket
WebSocket is a full-duplex communication channel over a single TCP connection. It provides a low-latency, efficient way to exchange data between client and server, handles both text and binary data, and is widely supported by modern browsers.
Server-Sent Events (SSE)
SSE is a simple server-to-client unidirectional communication mechanism. The server pushes updates to the client without the client needing to explicitly request them. Suitable for scenarios where the server needs to send frequent updates to the client.
SSE
SSE is a mechanism that allows servers to push data to clients in real-time over an HTTP connection. It's a simple, lightweight protocol that builds upon HTTP.
The EventSource API in the browser provides a straightforward way to handle SSE events. Here is how it works:
We create an EventSource object, specifying the URL of the SSE endpoint. The server keeps the connection open and writes events to it. The browser receives the events and triggers onmessage on the EventSource object, where we can handle them.
SSE is an underrated technology. Only in recent years, driven by the slowness of LLM inference, has its adoption increased.
A minimal Implementation
const http = require('http')

const clients = []

const server = http.createServer((req, res) => {
  if (req.headers.accept !== 'text/event-stream') {
    res.writeHead(200, { 'Content-Type': 'text/html' })
    res.end(`<!DOCTYPE html><html><script>
      const es = new EventSource('/');
      es.onmessage = e => console.log(e.data)
    </script>Open console to see the logs!</html>`)
    return
  }
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
  })
  res.on('close', () => {
    console.log('Client disconnected')
    clients.splice(clients.indexOf(res), 1)
  })
  clients.push(res)
})

function sendToAll(payload) {
  clients.forEach((res) => {
    // Each SSE message is a "data:" line terminated by a blank line
    res.write(`data: ${JSON.stringify(payload)}\n\n`)
  })
}

setInterval(() => {
  sendToAll({ message: `time is ${new Date}` })
}, 3000)

server.listen(3000, () => {
  console.log(`http://localhost:3000`)
})
Run it with Node.js and open the url in browser to see it in action.
Websockets
WebSockets are a technology that provides full-duplex, persistent communication channels between a web server and a web client (typically a web browser).
Unlike HTTP, which is request-response based, WebSockets allow for continuous, bi-directional data exchange. This makes them ideal for applications that require real-time updates, such as online chat, collaborative editing, and real-time data visualization.
A Chat Application
Deno has a built-in WebSocket server. Let's use Deno to write a simple chat application.
Add a server.js:
const clients = []
Deno.serve({
port: 3000,
handler: async (request) => {
if (request.headers.get("upgrade") == "websocket") {
const { socket, response } = Deno.upgradeWebSocket(request)
socket.onmessage = event => {
clients.forEach(x => {
if (x != socket) {
x.send(event.data)
}
})
}
socket.onclose = () => clients.splice(clients.indexOf(socket), 1)
socket.onerror = error => console.error("ERROR:", error)
socket.onopen = () => clients.push(socket)
return response
}
const file = await Deno.open("./index.html", { read: true })
return new Response(file.readable)
},
})
Add an index.html:
<!doctype html>
<style>
#container {
width: 400px;
padding: 50px;
display: flex;
gap: 20px;
flex-direction: column;
}
#output {
height: 300px;
border: 1px solid #ccc;
display: flex;
flex-direction: column;
align-items: flex-start;
gap: 5px;
padding: 5px;
}
.text {
background: green;
color: white;
border-radius: 5px;
padding: 5px;
}
.self {align-self: flex-end;}
.error {background: red;}
</style>
<div id="container">
<div id="output"></div>
<textarea id="input" rows=3></textarea>
</div>
<script>
const ws = new WebSocket("ws://127.0.0.1:3000")
const input = document.querySelector("#input")
function appendMessage(msg, type) {
  const div = document.createElement("div")
  div.className = `text ${type}`
  div.textContent = msg // textContent, not innerHTML, avoids XSS
  document.querySelector("#output").appendChild(div)
}
input.addEventListener('keydown', (event) => {
if (event.key === 'Enter') {
event.preventDefault()
ws.send(input.value)
appendMessage(input.value, 'self')
input.value = ""
}
})
ws.onmessage = (e) => appendMessage(e.data, 'incoming')
ws.onerror = (e) => appendMessage(e.data, 'error')
</script>
Start the server, and open two browser tabs to chat.
deno run --allow-net=0.0.0.0:3000 --allow-read=index.html server.js
Authentication
Authentication is the process of verifying a user's identity before granting access to a system or application. It is essential for ensuring that only authorized users can access sensitive data and perform specific actions.
Common authentication methods include session-based authentication, token-based authentication, and OAuth.
Session
HTTP 1.1 is stateless.
Cookies are used to keep the session.
When the user logs in successfully, an identifier (session ID) is put in the
Set-Cookie header, with HttpOnly set so that the JavaScript
runtime cannot access it.
The browser will send that identifier in the Cookie header with every
subsequent request.
The server then uses the identifier to look up the corresponding user information, like user name, role and permissions, usually from cache or database.
This allows the server to track the user's actions and provide a personalized experience.
Signed Cookie
Every time the server receives a request, it has to make a database query to look up the user information. This becomes problematic under heavy traffic.
Using a fast key-value store is one solution. Another is to store the user information in the cookie itself, which sidesteps the scaling problem entirely.
There is only one problem: what if the user modifies the cookie? The user could pretend to be anyone, or grant themselves any permissions.
To address this problem, we sign the cookie, so that the server can check whether the cookie has been tampered with.
Signing is not encrypting, so no sensitive content should be put in the cookie. The user can see the content, but any modification to it will be detected by the server.
JWT
JSON Web Token is a specification that defines a way to sign and exchange information.
Like signed cookies, the information can be verified and trusted, but it is not encrypted, so it is not for sensitive information.
The Format
JWT consist of 3 parts:
<header>.<payload>.<signature>
Header
Both header and payload are JSON strings encoded as Base64URL.
Header contains two fields, signing algorithm and type:
{
"alg": "HS256",
"typ": "JWT"
}
Payload
Payload contains the actual data, along with some standard fields like exp (expiration time) and sub (subject).
{
"exp": 1726657986,
"sub": "<user id>",
"name": "Alice"
}
All standard field names are shortened to three characters to save space. (But JSON is not a concise format anyway.)
Signature
The signature is computed as follows:
HMACSHA256(base64url(header) + '.' + base64url(payload), secret)
Signing
A minimal implementation:
const crypto = require('crypto')

function generateJWT(payload, secretKey) {
  const header = {
    alg: 'HS256',
    typ: 'JWT'
  }
  const encode = (obj) => Buffer.from(JSON.stringify(obj)).toString('base64url')
  const encodedHeader = encode(header)
  const encodedPayload = encode(payload)
  const signature = crypto.createHmac('sha256', secretKey) // HMAC, not a plain hash
    .update(`${encodedHeader}.${encodedPayload}`)
    .digest('base64url')
  return `${encodedHeader}.${encodedPayload}.${signature}`
}
Verification
Just sign again and compare the signatures. The exp value should also
be compared with the current timestamp.
Expiration
Tokens with expiration are much more secure. The exp field in the payload is
designed for this purpose. It should be an epoch time (in seconds) in the future.
Revoke a Token
There is no way to revoke a JWT, but you can maintain a blacklist in a key-value store or database, preferably loaded into memory for efficiency.
OAuth
OAuth is an authorization framework that allows third-party applications to access a user's data without sharing their credentials. It involves an authorization server and resource server, where users grant permission to apps to access their resources.
+---------+ +-----------------+ +--------------+
| User | | Authorization | | Resource |
| | | Server | | Server |
+----+----+ +--------+--------+ +------+-------+
| | |
| 1. Request Authorization | |
|--------------------------->| |
| | |
| 2. Redirect with Auth Code | |
|<---------------------------| |
| | |
| 3. Request Access Token | |
| with Auth Code | |
|--------------------------->| |
| | |
| 4. Return Access Token | |
|<---------------------------| |
| |
| 5. Access Resource using Access Token |
|-------------------------------------------------------->|
| |
| 6. Serve Resource |
|<--------------------------------------------------------|
| |
Implementing an OAuth Client
Let's take GitHub as an example. Go to GitHub and create an OAuth App.
You will need to fill in the Authorization callback URL, which is a URL on your own server.
When finished, you will get a client ID and a client secret.
The Authentication Process
-
Redirect the user to GitHub's auth page at https://github.com/login/oauth/authorize?client_id={clientID}.
-
When it succeeds, GitHub redirects back to your application, with the auth code appended to the URL.
-
In your application, at the callback URL, ask GitHub for an access token using the auth code:
fetch("https://github.com/login/oauth/access_token", {
method: "POST",
headers: {
"content-type": "application/json",
accept: "application/json",
},
body: JSON.stringify({
client_id: clientID,
client_secret: clientSecret,
code: ctx.req.query('code'), // the auth code parsed from URL
})
})
-
GitHub issues the access token.
-
Ask GitHub for the user info using the access token:
fetch("https://api.github.com/user", {
headers: {
Accept: 'application/vnd.github+json',
Authorization: `Bearer ${access_token}`,
'User-Agent': '<your app>',
'X-GitHub-Api-Version': '2022-11-28',
}
})
- GitHub returns the user info.
Security
Web security is a crucial and fascinating topic.
Common vulnerabilities in web applications include:
- SQL Injection
- Cross-Site Scripting (XSS)
- Cross-Site Request Forgery (CSRF)
- Broken Access Control
- Sensitive Data Exposure
- Weak or improperly implemented cryptographic algorithms.
MITM
In MITM (Man-in-the-Middle) attacks, the attacker intercepts and manipulates data transmitted between two parties, such as a server and a client (e.g., a web browser).
Using HTTP is like running in public without clothes: everything you submit or download is visible to the attacker. In recent years, browsers have started to mark HTTP websites as not secure.
Always use HTTPS to encrypt communications between your web server and users.
Here's how HTTPS works to prevent MITM attacks:
-
When a client connects to a server using HTTPS, they initiate a handshake process. During this process, the server presents its SSL/TLS certificate to your browser.
-
Your browser checks the certificate to ensure it's valid and issued by a trusted CA (Certificate Authority).
-
If the certificate is valid, your browser and the server establish a secure connection using encryption.
-
Data is transmitted between your browser and the server, encrypted to protect its privacy.
XSS
Cross-Site Scripting (XSS) attacks happen when malicious code is injected into a web page and executed by the user's browser.
XSS can be used to steal sensitive information, such as cookies, session tokens, or credit card details. And by stealing session tokens, attackers can gain unauthorized access to a user's account. Attackers can also use XSS to modify the appearance of a website or display malicious content.
To mitigate this risk, avoid placing untrusted content, such as
user-generated content, directly into HTML. Instead, use a suitable
template library or encode the content appropriately. In libraries
like React, content is always encoded unless you use
dangerouslySetInnerHTML, so exercise caution with that method.
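Encoding means replacing HTML metacharacters with entities so the browser treats untrusted content as text, not markup. A minimal sketch (template libraries do this for you):

```javascript
// Replace HTML metacharacters with their entity equivalents
function escapeHTML(s) {
  return s.replace(/[&<>"']/g, (c) => ({
    '&': '&amp;',
    '<': '&lt;',
    '>': '&gt;',
    '"': '&quot;',
    "'": '&#39;',
  }[c]))
}

console.log(escapeHTML('<script>alert(1)</script>'))
// &lt;script&gt;alert(1)&lt;/script&gt;
```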
CSRF
Cross-Site Request Forgery (CSRF) is a type of attack where a malicious site causes a user's browser to perform unwanted actions on a trusted site while the user is logged in.
How CSRF Works
- A user logs into a trusted website, creating an authenticated session.
- An attacker crafts a malicious link or form designed to target the trusted website.
- The attacker presents this link or form to the user through various means, such as email, ads on legitimate websites, or instant messages.
- When the user clicks the malicious link or submits the form, their browser sends a request to the trusted website, executing the attacker's action without the user's consent.
Example
Consider a banking website that allows users to transfer funds between accounts. An attacker could create a malicious link that targets the bank’s transfer page. If a logged-in user clicks the link, their browser would send a request to transfer funds to the attacker's account, unbeknownst to the user.
In reality, many banks implement additional verification measures, such as SMS codes or one-time passwords (OTPs), to enhance security and prevent unauthorized transactions. These measures are crucial in protecting users against CSRF attacks and other forms of fraud.
Prevention
One effective method to prevent CSRF attacks is the Synchronizer Token Pattern. This involves generating a unique, unpredictable token for each user request, storing it in a session variable, and including it as a hidden field in forms. When handling form submission, the server verifies the token against the session to ensure that the request originates from the original website.
Docker
Containers are revolutionary.
They offer VM-like isolation with significantly reduced size and memory overhead.
They provide automated image creation, streamlined distribution via registries, and a layered image structure, minimizing disk space and network bandwidth usage.
Containers have become a cornerstone of cloud computing, enabling developers to build, deploy, and scale applications more efficiently and reliably.
Setup
To install Docker, run the following command:
brew install --cask docker
Start the Docker daemon by typing "Docker" in Spotlight and pressing Enter.
To verify that the installation was successful, run a container from the hello-world image:
docker run hello-world
You should see the following message:
Hello from Docker!
This message shows that your installation appears to be working correctly.
Docker Compose
Docker Compose is a tool for defining and running multi-container Docker applications. It uses a YAML file to configure application services, networks, and volumes, making it easier to manage applications with interconnected containers.
While Docker Compose is not recommended for production use, especially for larger applications, it is ideal for setting up a platform-agnostic local development environment. Configuring a development environment can be time-consuming, particularly when onboarding new team members. Before Docker, Vagrant was commonly used for this purpose, relying on virtual machines.
Building Images
Build your own image
Let's write a simple Node.js app. Add an index.js:
const http = require('http');
const server = http.createServer((req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.end('Hello, world!\n');
});
server.listen(3000, '0.0.0.0', () => {
console.log('Server started');
});
A Dockerfile is a text file that describes how an image is
created. The following Dockerfile starts from a Node.js base image,
copies the source file, and starts the application on port 3000.
Add a Dockerfile:
FROM node:22
WORKDIR /usr/src/app
COPY index.js ./
EXPOSE 3000
CMD ["node", "index.js"]
Build image:
docker build -t my-app .
Run container from the image
docker run -p 3000:3000 -d --name app my-app
Verify it works:
curl localhost:3000
Remove the container:
docker rm -f app
Docker Compose
Docker Compose is a tool for running multi-container Docker applications. It uses a YAML file to configure the services (containers) and their relationships. Often used for setting up local development.
docker-compose.yml:
services:
web:
build: .
ports:
- "3000:3000"
depends_on:
- db
- redis
volumes:
- ./src:/usr/src/app
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: your_password
MYSQL_DATABASE: your_database
volumes:
- ./data:/var/lib/mysql
redis:
image: redis:latest
Start the services:
docker-compose up
Databases
Databases are fundamental to modern computing, serving as repositories for storing and organizing data. They come in two primary categories: SQL (Structured Query Language) databases and NoSQL databases.
SQL Databases
SQL databases, often referred to as relational databases, have a long history and use a structured approach to store data in tables. Each table consists of rows (records) and columns (fields), and relationships between tables are defined using foreign keys. SQL databases are well-suited for applications that require structured data.
Popular open-source SQL databases include SQLite, MySQL, and PostgreSQL.
NoSQL Databases
NoSQL databases offer a more flexible approach to data storage and retrieval. They are designed to handle large datasets, distributed systems, and dynamic data structures. NoSQL databases can be categorized into several types:
- Document databases: Store data in JSON-like documents. (MongoDB, CouchDB)
- Key-value stores: Store data as key-value pairs. (Redis)
- Wide-column databases: Store data in wide tables with columns that can vary per row. (Cassandra)
- Graph databases: Store data as nodes and relationships between them. (Neo4j)
Vector Databases
Vector databases are a specialized type of database designed to efficiently store and retrieve high-dimensional numerical data, such as vectors. These databases have gained significant attention in recent years, particularly in the fields of machine learning, natural language processing, and computer vision.
Unlike traditional databases that store data in rows and columns, vector databases store data as vectors. A vector is a sequence of numbers that represents a point in a high-dimensional space. By storing data in this format, vector databases can leverage powerful algorithms for similarity search and recommendation.
Vector databases are used for recommendation systems, image and video search and RAG systems.
Popular vector databases include ElasticSearch, Milvus and Chroma.
ACID
A transaction is a sequence of database operations that are treated as a single unit.
ACID (Atomicity, Consistency, Isolation, Durability) is a set of properties that ensure data integrity and consistency in database transactions.
- Atomicity: A transaction is either executed completely or not at all.
- Consistency: The database must remain in a consistent state before and after a transaction.
- Isolation: Transactions must be isolated from each other to prevent interference.
- Durability: The results of a committed transaction must be persistent and recoverable in the event of a system failure.
How is ACID achieved?
Atomicity: The database maintains a transaction log that records all operations performed within a transaction. If a transaction fails, the database can use the log to roll back changes, restoring the database to its previous consistent state. Periodic checkpoints mark points of consistency; if a failure occurs, the database can recover to the last checkpoint and replay the logged transactions from that point.
Consistency: The database enforces data integrity constraints, such as primary keys, foreign keys, and data types, ensuring that data remains consistent and valid throughout transactions.
Isolation: The database employs locking mechanisms to control concurrent access to data. Some databases also use timestamping to order transactions and prevent conflicts, ensuring that the operations of one transaction do not interfere with another.
Durability: The database implements a technique called Write-Ahead Logging (WAL). It writes log records before making changes to the database, ensuring that even in the event of a system crash, changes can be recovered from the log.
Distributed Database Systems
Distributed Database Systems have become increasingly essential for modern applications due to their high availability and scalability.
While distributed database systems are more complex to manage than centralized databases, there are many fully managed database services available. However, it is still necessary to understand how they work, even when using these services.
CAP Theorem
The CAP theorem states that a distributed database system can only satisfy two of the following three properties:
- Consistency: All nodes in the system see the same data at the same time.
- Availability: The system is always available for reads and writes.
- Partition Tolerance: The system remains operational even if there are network partitions between nodes.
Clustering
Clustering involves grouping multiple database servers to form a single logical unit. This can be done for various reasons, such as increasing availability, improving performance, or providing redundancy.
Sharding
Sharding distributes data across multiple physical servers or nodes by partitioning the dataset based on specific criteria (e.g., range, hash, or list partitioning).
Sharding enables databases to handle larger datasets and higher traffic by distributing the load across multiple machines. It also minimizes the risk of a single point of failure. However, sharding is more complex than clustering and requires careful management of data distribution, making it challenging to maintain data consistency across shards.
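A naive way to pick a shard is `hash(key) % N` (the hash function below is a toy). The catch is that changing the shard count remaps most keys, which is exactly the problem consistent hashing addresses:

```typescript
// Naive hash sharding: shard = hash(key) mod N.
// Works fine -- until N changes, and most keys land on a different shard.
function hash(key: string): number {
  let h = 0
  for (const ch of key) h = (h * 31 + ch.charCodeAt(0)) >>> 0
  return h
}

const shardFor = (key: string, shards: number) => hash(key) % shards

// Count how many of 1000 keys move when we grow from 4 to 5 shards.
let moved = 0
for (let i = 0; i < 1000; i++) {
  const key = `user:${i}`
  if (shardFor(key, 4) !== shardFor(key, 5)) moved++
}
console.log(`${moved} of 1000 keys changed shards`) // typically around 80%
```

Every moved key means data physically copied between servers, so naive modulo sharding makes growing or shrinking the cluster expensive.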
Consistent Hashing
Consistent hashing is a technique used to distribute data across a cluster of servers in a way that minimizes the impact of adding or removing servers. This is particularly useful for distributed systems that may experience changes in server count over time. Here's how it works:
- Hash Ring: A circular hash ring is created, with each node assigned a unique hash value.
- Data Distribution: Data items are hashed using the same hash function as the nodes. Each data item is assigned to the node whose hash value is immediately clockwise on the ring.
- Node Addition/Removal: When a node is added or removed, only a small subset of data items needs to be redistributed. The hash ring remains relatively unchanged, allowing most data items to retain their original node assignments.
Consistent hashing reduces data movement, improves scalability, and allows the system to handle changes in the number of servers without significant performance degradation. This also enhances availability since the system can continue providing service even if some nodes fail.
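The steps above can be sketched in a few dozen lines. For brevity each node gets one position on the ring; production implementations place many virtual nodes per server to even out the distribution:

```typescript
// FNV-1a: a simple, deterministic string hash (illustrative choice).
function hash(s: string): number {
  let h = 2166136261
  for (const ch of s) {
    h ^= ch.charCodeAt(0)
    h = Math.imul(h, 16777619) >>> 0
  }
  return h
}

class HashRing {
  private ring: { pos: number; node: string }[] = []

  add(node: string) {
    this.ring.push({ pos: hash(node), node })
    this.ring.sort((a, b) => a.pos - b.pos)
  }

  remove(node: string) {
    this.ring = this.ring.filter(e => e.node !== node)
  }

  nodeFor(key: string): string {
    const h = hash(key)
    // First node at or past the key's position; wrap around to the start.
    const entry = this.ring.find(e => e.pos >= h) ?? this.ring[0]
    return entry.node
  }
}
```

Removing a node only reassigns the keys that lived on it: every other key's `nodeFor` answer is unchanged, which is the whole point.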
Consistency Levels
Consistency levels in databases define how data is synchronized across multiple nodes, determining the trade-offs between data availability and consistency:
- Strong Consistency: All reads return the most recent committed data.
- Eventual Consistency: Data will eventually become consistent across all nodes, but there may be temporary inconsistencies.
- Causal Consistency: Updates that are causally related are seen in the same order by all nodes.
SQL
SQL (Structured Query Language) is the standard language used for interacting with and managing relational databases. It is essential for creating, retrieving, updating, and deleting data in databases. SQL has its origins in the early 1970s and remains the predominant language for relational database management systems.
Key Concepts
- Tables: Structures that store data in rows and columns.
- Columns: Fields within a table that hold specific types of data (e.g., text, numbers, dates).
- Rows: Records within a table that represent individual data entries.
- Queries: Statements that retrieve data from a database based on specified criteria.
- Joins: Combine data from multiple tables based on common fields.
- Aggregates: Functions that perform calculations on a dataset (e.g., SUM, AVG, COUNT).
Setup PostgreSQL
To run PostgreSQL using Docker:
docker run --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=my_password -d postgres
Connect to the PostgreSQL database using psql:
docker exec -it postgres psql -U postgres
Create a new database:
CREATE DATABASE mydb;
Use \l to list databases:
postgres=# \l
List of databases
Name | Owner | Encoding | Locale Provider | Collate | Ctype
-----------+----------+----------+-----------------+------------+------------
mydb | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8
postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8
Use \c (c for connect) to switch to the new database:
\c mydb
Basic SQL
Create a New Table
CREATE TABLE customers (
id SERIAL PRIMARY KEY,
name VARCHAR(50),
email VARCHAR(100)
);
Use \dt to list tables:
mydb=# \dt
List of relations
Schema | Name | Type | Owner
--------+-----------+-------+----------
public | customers | table | postgres
Use \d to describe the table:
mydb=# \d customers
Table "public.customers"
Column | Type | Collation | Nullable | Default
--------+------------------------+-----------+----------+---------------------------------------
id | integer | | not null | nextval('customers_id_seq'::regclass)
name | character varying(50) | | |
email | character varying(100) | | |
Indexes:
"customers_pkey" PRIMARY KEY, btree (id)
Inserting Data
Insert a single record:
INSERT INTO customers (name, email)
VALUES ('Alice', 'alice@example.com');
Batch insert records:
INSERT INTO customers (name, email) VALUES
('David Lee', 'davidlee@example.com'),
('Emily Brown', 'emilybrown@example.com');
Retrieving Data
Retrieve all records:
mydb=# SELECT * from customers;
id | name | email
----+-------------+------------------------
1 | Alice | alice@example.com
2 | David Lee | davidlee@example.com
3 | Emily Brown | emilybrown@example.com
(3 rows)
Retrieve a specific record:
mydb=# SELECT * from customers WHERE id = 3;
id | name | email
----+-------------+------------------------
3 | Emily Brown | emilybrown@example.com
(1 row)
Retrieve a limited number of records in descending order:
mydb=# SELECT * from customers ORDER BY name DESC LIMIT 2;
id | name | email
----+-------------+------------------------
3 | Emily Brown | emilybrown@example.com
2 | David Lee | davidlee@example.com
(2 rows)
Updating Data
Update a specific record:
UPDATE customers
SET email = 'new_email@example.com'
WHERE id = 1;
Deleting Data
Delete a specific record:
DELETE FROM customers WHERE id = 2;
Note: In real-world applications, it's advisable to avoid using DELETE whenever possible. Reasons include:
- Irreversibility: Permanently deleting important data can lead to significant challenges in recovery.
- Data Integrity: Deleting data without consideration can result in inconsistencies and orphaned records (e.g., deleting a customer without addressing related orders).
- Audit Trails: Many organizations require a record of data changes for compliance and auditing purposes.
Instead of deleting records, consider implementing "soft deletes" by adding a deleted_at column or a status field to indicate deleted records.
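In application code the pattern looks like this (illustrative types, not a library API): the "delete" sets a timestamp, and every normal query filters on it:

```typescript
// Soft delete: flag the row instead of removing it.
type Customer = { id: number; name: string; deletedAt: Date | null }

const customers: Customer[] = [
  { id: 1, name: 'Alice', deletedAt: null },
  { id: 2, name: 'David Lee', deletedAt: null },
]

function softDelete(id: number) {
  const c = customers.find(c => c.id === id)
  if (c) c.deletedAt = new Date()
}

// Normal queries must exclude soft-deleted rows.
const active = () => customers.filter(c => c.deletedAt === null)

softDelete(2)
console.log(active().map(c => c.name)) // only Alice remains
```

The deleted row still exists for audits and recovery; the equivalent SQL would be an UPDATE setting deleted_at plus a WHERE deleted_at IS NULL on every query.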
Joining Tables
First, create a new table for orders:
CREATE TABLE orders (
id SERIAL PRIMARY KEY,
customer_id INT REFERENCES customers(id),
order_date DATE,
total_amount DECIMAL(10, 2)
);
Insert some records into the orders table:
INSERT INTO orders (customer_id, order_date, total_amount) VALUES
(1, '2024-10-01', 100.00),
(1, '2024-10-02', 50.50),
(3, '2024-10-03', 150.25);
To retrieve data from multiple tables, perform an inner join:
SELECT customers.name, orders.total_amount
FROM customers
INNER JOIN orders ON customers.id = orders.customer_id
WHERE orders.order_date BETWEEN '2024-01-01' AND '2024-12-31';
This query returns:
name | total_amount
----------------+--------------
Emily Brown | 150.25
Alice | 50.50
Alice | 100.00
Aggregation
In the previous example we got two records for Alice. To add them
together, use the SUM function:
SELECT customers.name, SUM(orders.total_amount) AS total
FROM customers
INNER JOIN orders ON customers.id = orders.customer_id
WHERE orders.order_date BETWEEN '2024-01-01' AND '2024-12-31'
GROUP BY customers.id;
This query will produce:
name | total
----------------+---------
Alice | 150.50
Emily Brown | 150.25
Besides SUM, there are other aggregate functions like AVG, MIN, MAX,
and COUNT to calculate different statistics.
Connecting to PostgreSQL from JavaScript
Install the postgres package:
bun add postgres
Create index.ts with following content:
import postgres from 'postgres'
const sql = postgres({username: 'postgres', password: 'my_password'})
console.log(await sql`select * from customers`)
Running the script with:
bun index.ts
You should see an output similar to this:
[
{
"id": 1,
"name": "Alice",
"email": "new_email@example.com"
},
{
"id": 3,
"name": "Emily Brown",
"email": "emilybrown@example.com"
}
]
That's it. Read the documentation for more.
Index
What is a Database Index
A database index is a data structure that enhances the speed of data retrieval operations in a database. Instead of scanning every row in a table when you query a database, an index allows the database to quickly locate the data.
You can think of it like a book's index: it helps you find the relevant page containing the information you're seeking.
How Indexes Work
The most common type of index is the B-tree index. B-tree indexes are versatile and efficient for a wide range of query operations, including point queries, range queries, and sorting. They are also used to enforce uniqueness constraints.
A B-tree index is a self-balancing tree data structure employed by database systems to efficiently store and retrieve data. Its design minimizes the number of disk I/O operations required for search, insert, and delete operations, making it suitable for large datasets.
In essence, a B-tree organizes data in a way that facilitates quick location. Think of it as a well-organized bookshelf, divided into sections and shelves, with the data (like books) stored in a specific order. This structure allows you to swiftly narrow your search by checking the appropriate section and then quickly finding the right shelf.
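A toy comparison conveys the payoff. A full scan touches every row; a sorted structure narrows the search logarithmically, which is the same principle a B-tree applies on disk:

```typescript
// Without an index: check every row until we find the target.
function fullScan(rows: number[], target: number): number {
  for (let i = 0; i < rows.length; i++) if (rows[i] === target) return i
  return -1
}

// With an "index" (here just a sorted array + binary search):
// each step halves the remaining search space.
function indexedLookup(sorted: number[], target: number): number {
  let lo = 0, hi = sorted.length - 1
  while (lo <= hi) {
    const mid = (lo + hi) >> 1
    if (sorted[mid] === target) return mid
    if (sorted[mid] < target) lo = mid + 1
    else hi = mid - 1
  }
  return -1
}

const rows = Array.from({ length: 1_000_000 }, (_, i) => i)
// The scan may touch a million entries; the binary search touches at most ~20.
console.log(fullScan(rows, 999_999), indexedLookup(rows, 999_999))
```

A B-tree generalizes this: instead of halving, each node fans out to hundreds of children, so even billions of rows are reachable in a handful of disk reads.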
Creating Indexes
Primary Index
Automatically created when a primary key is defined for a table, ensuring each row has a unique identifier.
Example:
CREATE TABLE users (
id SERIAL PRIMARY KEY,
username VARCHAR(50)
);
Using UUID:
CREATE TABLE users (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
username VARCHAR(50)
);
Composite Index
Created on multiple columns to optimize queries that involve those columns.
Example:
CREATE INDEX idx_users_name
ON users (first_name, last_name);
Unique Index
Ensures that all values in a specified column, or set of columns, are unique.
Example:
CREATE UNIQUE INDEX idx_unique_username
ON users (username);
When to Use Indexes
- Frequently Queried Columns: If certain columns are frequently searched or filtered, creating indexes on those columns can significantly enhance query performance.
- Join Conditions: Indexes on columns used in join operations can accelerate queries that combine data from multiple tables.
- Sorting and Grouping: Indexes can improve the efficiency of queries that involve sorting or grouping data based on specific columns.
The Downsides
Indexes come with costs. Creating and maintaining indexes can introduce overhead for database operations like insert and update. In some cases, indexes may occupy more space than the actual data.
SQL Injection
SQL injection involves injecting malicious SQL code into a web application to manipulate databases or gain unauthorized access.
How It Works
Consider the following SQL query used to validate a user during login:
queryString = `SELECT * FROM users
WHERE user_name='${userName}' AND password='${hash(password)}'`
An attacker could manipulate the input by setting userName to
admin' --. This alters the queryString to:
SELECT * FROM users WHERE user_name='admin' -- AND password='***'
Here, the comment -- ignores the password check, allowing the
attacker to log in as an admin.
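You can reproduce the attack with plain string building (hash is stubbed out here for illustration):

```typescript
const hash = (_password: string) => '***' // stub; a real app would hash the password

function buildQuery(userName: string, password: string): string {
  // Vulnerable: user input is spliced directly into the SQL text.
  return `SELECT * FROM users WHERE user_name='${userName}' AND password='${hash(password)}'`
}

console.log(buildQuery("admin' --", "whatever"))
// SELECT * FROM users WHERE user_name='admin' --' AND password='***'
```

Everything after `--` is a comment to the database, so the password check never runs.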
Prevention
To prevent SQL injection, always use parameterized queries or an Object-Relational Mapping (ORM) tool. This separates SQL statements from user data, ensuring that input is not treated as executable SQL code.
Using parameterized queries, the query would be written as:
queryString = `SELECT * FROM users
WHERE user_name=$1 AND password=$2`
client.query(queryString, [userName, hash(password)])
Libraries like postgres provide a more convenient syntax using tagged templates, making it easier to write:
sql`SELECT * FROM users
WHERE user_name=${userName} AND password=${hash(password)}`
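A few lines of plain JavaScript show what a tag function actually receives: the SQL text and the interpolated values arrive separately, so the values can be sent to the server as parameters instead of being spliced into the SQL. This is a sketch of the mechanism, not the postgres library's internals:

```typescript
// A tag function gets the literal SQL fragments and the interpolated
// values as separate arguments -- the values never become SQL text.
function sqlTag(strings: TemplateStringsArray, ...values: unknown[]) {
  const text = strings.reduce((acc, s, i) => acc + (i ? `$${i}` : '') + s, '')
  return { text, values }
}

const userName = "admin' --"
const q = sqlTag`SELECT * FROM users WHERE user_name=${userName}`
console.log(q.text)   // SELECT * FROM users WHERE user_name=$1
console.log(q.values) // [ "admin' --" ]
```

The hostile input ends up as an inert parameter value, not as part of the query.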
ORM
ORM (Object-Relational Mapping) is a programming technique that bridges the gap between object-oriented programming languages and relational databases like PostgreSQL. It provides an abstraction layer, enabling developers to interact with database tables and records using objects and methods rather than raw SQL queries.
Popular ORM options include Prisma and Drizzle. Other notable libraries include Hibernate for Java, SQLAlchemy for Python, and ActiveRecord for Ruby.
Using Drizzle
First, install the necessary packages:
bun add drizzle-orm pg
bun add --dev drizzle-kit @types/pg
Create Table
Create a schema.ts file:
import { integer, pgTable, varchar } from "drizzle-orm/pg-core"
export const usersTable = pgTable("users", {
id: integer().primaryKey().generatedAlwaysAsIdentity(),
name: varchar({ length: 255 }).notNull(),
age: integer().notNull(),
email: varchar({ length: 255 }).notNull().unique(),
})
Create a drizzle.config.ts file:
import { defineConfig } from 'drizzle-kit'
export default defineConfig({
out: './drizzle',
schema: './schema.ts',
dialect: 'postgresql',
dbCredentials: {
url: process.env.DATABASE_URL!,
},
})
Add a .env file:
DATABASE_URL=postgres://postgres:my_password@localhost:5432/postgres
Apply the schema:
$ bunx drizzle-kit push
[✓] Pulling schema from database...
[✓] Changes applied
Query
Create an index.ts file:
import { drizzle } from 'drizzle-orm/connect'
import { eq } from 'drizzle-orm'
import { usersTable } from './schema'
const db = await drizzle("node-postgres", process.env.DATABASE_URL!)
await db.insert(usersTable).values({
name: 'Alice',
age: 7,
email: 'alice@example.com',
})
await db
.update(usersTable)
.set({ age: 8 })
.where(eq(usersTable.email, 'alice@example.com'))
const users = await db.select().from(usersTable)
console.log(users)
The syntax is actually more like a SQL builder than a traditional ORM.
Run the query:
$ bun index.ts
[
{
id: 1,
name: "Alice",
age: 8,
email: "alice@example.com",
}
]
Bonus
Drizzle has an admin UI baked in:
bunx drizzle-kit studio
Redis
Before Redis, Memcached was a common key-value store. PHP's short-lived processes made in-process caching ineffective, necessitating a standalone cache service like Memcached.
Long-lived processes (e.g., Node.js) can often rely on in-memory caching. However, scenarios requiring shared state across multiple application instances still benefit from a standalone solution.
Redis extends beyond basic key-value caching by offering diverse data structures like lists, sets, and HyperLogLog etc. This versatility enables its use in various applications such as session storage, task queues, rate limiting, leaderboards, and more.
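As an example of state that belongs in Redis, here is a fixed-window rate limiter sketched in memory. With Redis, the same logic is an INCR on a per-client key that EXPIREs with the window, giving you a limit shared across all application instances:

```typescript
// Fixed-window rate limiter: at most `limit` requests per `windowMs`.
function makeRateLimiter(limit: number, windowMs: number) {
  const counters = new Map<string, { count: number; resetAt: number }>()
  return (clientId: string, now = Date.now()): boolean => {
    const entry = counters.get(clientId)
    if (!entry || now >= entry.resetAt) {
      // New window: start counting afresh (Redis: INCR + EXPIRE on a fresh key).
      counters.set(clientId, { count: 1, resetAt: now + windowMs })
      return true
    }
    entry.count++
    return entry.count <= limit
  }
}

const allow = makeRateLimiter(3, 60_000)
console.log(allow('1.2.3.4'), allow('1.2.3.4'), allow('1.2.3.4'), allow('1.2.3.4'))
// true true true false
```

The in-memory version only limits one process; moving the counter into Redis is what makes it work behind a load balancer.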
Setup
docker run -d --name my-redis-stack -p 6379:6379 redis/redis-stack-server:latest
Talk to the server:
docker exec -it my-redis-stack redis-cli
127.0.0.1:6379> PING
PONG
Basic Data Structures and Operations
A comprehensive list of Redis Commands can be found here.
Strings
127.0.0.1:6379> SET key value
OK
127.0.0.1:6379> GET key
"value"
Hashes
127.0.0.1:6379> HMSET user:1 name John age 30
OK
127.0.0.1:6379> HGET user:1 age
"30"
The : in user:1 is used as a separator in the key
Lists
127.0.0.1:6379> LPUSH mylist item1 item2
(integer) 2
127.0.0.1:6379> LRANGE mylist 0 -1
1) "item2"
2) "item1"
Sets
127.0.0.1:6379> SADD myset element1 element2
(integer) 2
127.0.0.1:6379> SMEMBERS myset
1) "element1"
2) "element2"
Zsets
Zsets are ordered sets where each element has a score.
127.0.0.1:6379> ZADD myzset 1 item1 2 item2
(integer) 2
127.0.0.1:6379> ZRANGE myzset 0 -1 WITHSCORES
1) "item1"
2) "1"
3) "item2"
4) "2"
Use Redis in Node.js
There is the official client node-redis, but ioredis offers a better developer experience.
MongoDB
MongoDB is a widely-used NoSQL database recognized for its flexibility, scalability, and performance.
Unlike traditional relational databases that require a predefined schema, MongoDB allows for a dynamic schema. This means you can store documents with varying structures within the same collection, eliminating the need for schema management tasks like schema migrations.
Additionally, MongoDB employs its own query language, which is based on JavaScript, making it accessible for novice web developers without requiring them to learn SQL.
These features have significantly contributed to MongoDB's success.
Setup
docker run --name my-mongodb -p 27017:27017 -d mongo
Talk to MongoDB
Using mongosh:
docker exec -it my-mongodb mongosh
test> use mydb
switched to db mydb
mydb> db.mycoll.insertOne({name: 'Alice'})
{
acknowledged: true,
insertedId: ObjectId('66f8b8ee56e5da57011681ed')
}
mydb> db.mycoll.findOne({name: 'Alice'})
{ _id: ObjectId('66f8b8ee56e5da57011681ed'), name: 'Alice' }
Using the JavaScript client:
npm i --save mongodb
const { MongoClient } = require('mongodb')
async function main() {
const client = new MongoClient('mongodb://localhost:27017')
await client.connect()
const db = client.db('mydb')
const mycoll = db.collection('mycoll')
console.log(await mycoll.find().toArray())
}
main()
Message Queues
Message queues are a distributed computing pattern that enables applications to communicate asynchronously. They act as intermediaries between components, decoupling them and facilitating reliable, scalable, and flexible messaging.
Popular message queue systems include RabbitMQ and Kafka, while cloud-based options include Amazon Simple Queue Service (SQS) and Azure Service Bus.
Kafka
Kafka is a distributed streaming platform that was originally developed by LinkedIn in 2010. It was designed to handle real-time data pipelines at a massive scale, making it ideal for applications that need to process large volumes of data in real-time.
Core Concepts
In Kafka, producers send messages to topics, which are categories for storing these messages. Consumers read messages from topics, and they can belong to consumer groups, allowing multiple consumers to share the load of processing messages from a topic. Each consumer in a group reads from different partitions of the topic, ensuring balanced and efficient data processing.
digraph KafkaArchitecture {
// Node definitions
node [shape=box];
Producer [label="Producer 1"];
Producer2 [label="Producer 2"];
TopicA [label="Topic A"];
TopicB [label="Topic B"];
subgraph cluster_ConsumerGroup1 {
label="Consumer Group";
Consumer1 [label="Consumer 1"];
Consumer2 [label="Consumer 2"];
Consumer3 [label="Consumer 3"];
}
// Connections
Producer -> TopicA [label="Produces"];
Producer -> TopicB [label="Produces"];
Producer2 -> TopicB [label="Produces"];
TopicA -> Consumer1 [label="Consumed by"];
TopicA -> Consumer2 [label="Consumed by"];
TopicB -> Consumer3 [label="Consumed by"];
}
Run Kafka
To run Kafka in a development setup, add a docker-compose.yml:
services:
kafka:
image: 'bitnami/kafka:latest'
ports:
- '9094:9094'
environment:
- KAFKA_CFG_NODE_ID=0
- KAFKA_CFG_PROCESS_ROLES=controller,broker
- KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://:9094
- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,EXTERNAL://localhost:9094
- "KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,\
EXTERNAL:PLAINTEXT,PLAINTEXT:PLAINTEXT"
- KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka:9093
- KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
docker-compose up -d
Interact with Kafka
bun add kafkajs
Add a producer.ts
import { Kafka, Partitioners } from 'kafkajs'
const kafka = new Kafka({
clientId: 'my-app',
brokers: ['localhost:9094'],
})
const producer = kafka.producer({
createPartitioner: Partitioners.LegacyPartitioner
})
await producer.connect()
await producer.send({
topic: 'test-topic',
messages: [{ value: 'Hello Kafka!' }],
})
await producer.disconnect()
Add a consumer.ts
import { Kafka } from 'kafkajs'
const kafka = new Kafka({
clientId: 'my-app',
brokers: ['localhost:9094'],
})
const consumer = kafka.consumer({ groupId: 'test-group' })
await consumer.connect()
await consumer.subscribe({ topic: 'test-topic', fromBeginning: true })
await consumer.run({
eachMessage: async ({ topic, partition, message }) => {
console.log({
value: message.value!.toString(),
})
},
})
Then run producer.ts and consumer.ts using bun:
bun producer.ts
bun consumer.ts
When done, stop and remove the container:
docker-compose down --remove-orphans
Testing
Unit Testing
Jest
Jest is a popular JavaScript testing framework that provides a simple, intuitive, and powerful way to write tests for your applications. It's particularly well-suited for testing React, Angular, and Node.js projects.
Jest is designed to work out of the box with minimal setup, making it easy to get started with testing.
Snapshot testing is an important feature of Jest: it generates snapshots of your components or data structures, so you don't need to write and update them manually, which saves a lot of time.
The testing code looks like this:
import { add, subtract } from './my'
describe('Calculator', () => {
it('should add two numbers correctly', () => {
expect(add(2, 3)).toBe(5)
})
it('should subtract two numbers correctly', () => {
expect(subtract(5, 2)).toBe(3)
})
})
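Snapshot testing, mentioned above, is simple to sketch: serialize the value, record it on the first run, and compare on later runs. A toy version of the mechanism (Jest stores snapshots in files next to your tests, not in a Map):

```typescript
// Miniature snapshot testing: first run records, later runs compare.
const store = new Map<string, string>()

function toMatchSnapshot(name: string, value: unknown): boolean {
  const serialized = JSON.stringify(value, null, 2)
  if (!store.has(name)) {
    store.set(name, serialized) // first run: write the snapshot
    return true
  }
  return store.get(name) === serialized // later runs: compare against it
}

console.log(toMatchSnapshot('user', { name: 'Alice', age: 7 })) // true (recorded)
console.log(toMatchSnapshot('user', { name: 'Alice', age: 7 })) // true (matches)
console.log(toMatchSnapshot('user', { name: 'Alice', age: 8 })) // false (changed)
```

When a snapshot mismatch is intentional, Jest lets you re-record with `jest -u` instead of hand-editing expected values.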
bun
Bun has a built-in test runner, and it's blazing fast!
$ bun test
bun test v1.1.13 (bd6a6051)
my.test.ts:
✓ Calculator > should add two numbers correctly
✓ Calculator > should subtract two numbers correctly [0.06ms]
2 pass
0 fail
2 expect() calls
Ran 2 tests across 1 files. [10.00ms]
E2E UI Test
Cypress
Cypress is a modern end-to-end testing framework designed for web applications. It runs tests directly in the browser, providing a more reliable and intuitive experience compared to traditional testing tools.
The testing code looks like this:
describe('My Application', () => {
it('should have a welcome message', () => {
cy.visit('/welcome')
cy.contains('Welcome to my application').should('be.visible')
})
})
Performance Optimization
Web performance refers to how quickly and efficiently a web page loads and responds to user interactions.
Measuring Performance
Tools
unlighthouse
npx unlighthouse --site <your-site>
Web Vitals
- First Contentful Paint
- Largest Contentful Paint
- Interaction to Next Paint
Use the Web Vitals extension to measure these metrics in the browser.
Improving Performance
Key factors impacting web performance include:
- Server Response Time: How quickly the server processes requests and sends responses.
- Client-Side Rendering Time: How efficiently the browser renders the content on the client’s device.
- Resource Loading: How efficiently the browser loads resources such as images, scripts, and stylesheets.
- Network Latency: The time it takes for data to travel between the server and the client.
Server-Side Performance Optimization:
- Write efficient code and leverage non-blocking I/O to perform multiple operations at the same time.
- Create indexes on frequently queried columns to enhance database query performance and identify and fix slow queries.
- Cache frequently accessed data in memory, reducing the need for database lookups.
- Use the right database for the right job.
- Use load balancer to distribute traffic across multiple servers to improve scalability.
- Use circuit breakers and rate limiter.
Client-Side Performance Optimization:
- Apply code splitting strategies to load only necessary JavaScript based on user interactions.
- Use only essential third-party scripts and libraries to minimize overhead.
- Defer loading non-critical JavaScript to improve page load times.
- Use HTTP headers like Expires, Cache-Control, and ETag to manage resource caching in browsers.
- Reduce image file sizes through compression and select appropriate image formats.
- Minify CSS and JavaScript.
- Defer loading images and resources that are not immediately visible to the user.
- Use prefetch technologies including dns-prefetch, preconnect, prefetch, preload, and prerender:
<link rel="dns-prefetch" href="https://example.com">
- Use a CDN (Content Delivery Network) to distribute content across multiple servers globally, minimizing latency for users in various locations.
Microservices
Design is about pulling things apart. -- Rich Hickey
Microservices break down large applications into smaller, independent services that communicate with each other. This approach enables independent development, deployment, and scaling, allowing for the use of diverse technologies and languages. Smaller services are easier to understand, refactor and replace, and failures are isolated, preventing cascading application crashes. However, microservices increase overall system complexity due to the distributed nature of the application.
Core Components in Microservices
Configuration Management
Configuration data for microservices is usually stored and managed centrally. This can be implemented using tools like Consul.
These tools are also used for service discovery, a mechanism for microservices to discover and communicate with each other; options include Consul, ZooKeeper, and Eureka.
API Gateway
An API gateway serves as a single entry point for client interactions with microservices, managing request routing and providing security, rate limiting, and other cross-cutting concerns. Popular options for API gateways include Kong and Tyk.
Message Broker
A message broker is ideal for decoupling microservices and serving as an event bus. Kafka and RabbitMQ are common choices.
Monitoring and Logging
Monitoring and logging tools track microservice health and performance, encompassing metrics, logs, and distributed tracing, with options like Prometheus, Grafana, and the ELK Stack being widely adopted.
Distributed Tracing
Distributed tracing tools like Jaeger and Zipkin are instrumental in visualizing and troubleshooting interactions across microservices.
Container Orchestration
Tools for deploying and managing microservices like Kubernetes.
Service Mesh
Service meshes provide an abstraction layer between services in a distributed application. They simplify inter-service communication by:
- Routing requests to the right services based on criteria like load balancing and A/B testing.
- Automatically registering and discovering services within the mesh.
- Enforcing security measures such as authentication, authorization, and encryption for service communication.
- Offering insights into service performance, health, and dependencies.
Notable service mesh solutions include Istio, known for its robust features like traffic management, service discovery, and security, and Linkerd, which emphasizes simplicity and performance.
Kubernetes
Containers played a pivotal role in the rise of microservices architecture, by offering a reliable, efficient, and scalable way to package and deploy individual services.
Kubernetes is an open-source platform for managing containerized applications. It automates many common tasks associated with running containers, such as deployment, scaling, and load balancing.
Key Concepts
A Kubernetes cluster consists of a control plane and multiple nodes. Each node hosts one or more pods.
A Pod is the smallest unit of deployment in Kubernetes. It represents a group of containers that share a network namespace and storage volumes.
A Deployment is a declarative specification of the desired state. It ensures that the desired number of pods are running and manages updates to them.
A Service is a network abstraction that defines how a group of pods can be accessed.
Ingress or Gateway exposes services to the outside world, allowing external traffic to reach the appropriate service within the cluster.
Setup
To try out k8s locally, there is minikube. On cloud platforms there are Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), and Azure Kubernetes Service (AKS).
brew install kubectl minikube
minikube start
kubectl get pods
Usage
kubectl create deployment my-app --image=kicbase/echo-server:1.0
kubectl expose deployment my-app --type=NodePort --port=8080
Get the service endpoint:
minikube service my-app
We've just deployed our app on minikube. We can also see our pod running via kubectl:
kubectl get pod
NAME READY STATUS RESTARTS AGE
my-app-7d48979fd6-rrjrx 1/1 Running 0 1m
Delete the pod:
kubectl delete deployment my-app
kubectl delete service my-app
K8s Manifest
K8s uses a YAML file called a manifest to define your deployment and
service. Add a deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
replicas: 2
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app
image: kicbase/echo-server:1.0
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: my-app-svc
spec:
selector:
app: my-app
ports:
- protocol: TCP
port: 80
targetPort: 8080
Apply the manifest file:
kubectl apply -f deployment.yml
We can see that there are 2 replicas, as specified in the manifest:
kubectl get pod
NAME READY STATUS RESTARTS AGE
my-app-7c4f69b497-kpv2w 1/1 Running 0 7s
my-app-7c4f69b497-wrkjz 1/1 Running 0 7s
You can change it, and apply again.
Now access the service:
minikube service my-app-svc
Finally undo the deployment:
kubectl delete -f deployment.yml
Serverless
Serverless computing is a cloud computing model where developers can build, run, and manage applications without having to provision or manage servers. In this model, the cloud service provider handles the underlying infrastructure, scaling resources automatically based on demand.
Popular platforms include AWS Lambda, Azure Functions, Google Cloud Functions, Cloudflare and Vercel.
AWS
Amazon Web Services (AWS) is a leading cloud platform and a pioneer in cloud computing, offering a wide range of services.
Here are a few key services:
Lambda
Launched in November 2014, AWS Lambda revolutionized serverless computing, allowing developers to build and deploy applications without managing servers.
Example of a Lambda function in JavaScript:
export const handler = async (event) => {
return {
statusCode: 200,
body: JSON.stringify({ message: 'Hello from Lambda!' })
}
}
DynamoDB
AWS DynamoDB is a fully managed NoSQL key-value database designed for high performance.
Basic usage:
import AWS from 'aws-sdk'
const dynamoDB = new AWS.DynamoDB.DocumentClient()
dynamoDB.put({
TableName: 'MyTable',
Item: {
id: 1,
name: 'Alice',
age: 7
}
}, (err, data) => { })
dynamoDB.get({
TableName: 'MyTable',
Key: {id: 1}
}, (err, data) => {})
S3
AWS S3 (Simple Storage Service) provides scalable, durable, and cost-effective object storage. It is commonly used for hosting static websites and distributing content like images, videos, and documents globally.
Aurora
AWS Aurora is a fully managed, high-performance relational database compatible with MySQL and PostgreSQL.
Cloudflare
The Cloudflare Developer Platform is a suite of tools and services designed to help developers build and manage modern web applications. It offers various features to enhance performance, security, and reliability.
Key Features of Cloudflare Developer Platform
Workers
Cloudflare Workers is a serverless platform that executes JavaScript code at the network edge, resulting in faster performance and reduced latency by running closer to users. It uses V8 isolates, making it extremely lightweight and efficient.
To create a new worker project, run:
npm create cloudflare@latest -- worker0
cd worker0
This command generates the following file structure:
├── package.json
├── src
│ └── index.ts
├── test
│ ├── index.spec.ts
│ └── tsconfig.json
├── tsconfig.json
├── vitest.config.mts
├── worker-configuration.d.ts
└── wrangler.toml
In index.ts, the content is as follows:
export default {
async fetch(request, env, ctx): Promise<Response> {
return new Response('Hello World!');
},
} satisfies ExportedHandler<Env>;
Cloudflare’s API adheres to web standards, such as the Response object used above.
To start the server, run:
npx wrangler dev
Wrangler is the command-line interface for the Cloudflare Developer Platform.
To deploy, execute:
npx wrangler deploy
R2
R2 is a globally distributed object storage service that is highly scalable, durable, and cost-effective.
You can create a bucket using Wrangler:
npx wrangler r2 bucket create bucket0
To use the bucket, add a binding in wrangler.toml:
[[r2_buckets]]
binding = 'bucket'
bucket_name = 'bucket0'
You can access the bucket in your worker like this:
await env.bucket.put(key, req.body, {
  httpMetadata: {
    contentType: req.headers.get('content-type')
  }
});
await env.bucket.get(key);
D1
D1 is Cloudflare’s native serverless database built on SQLite.
Create a D1 database using Wrangler:
npx wrangler d1 create db0
Add a binding in wrangler.toml:
[[d1_databases]]
binding = "db"
database_name = "db0"
database_id = "17b5fd39-701a-4b67-2103-3ea11d62be69"
To create a table, write the SQL commands in a file and execute it with Wrangler:
npx wrangler d1 execute db0 --local --file=./sql/files.sql
Use the --local flag to apply changes to the local database. When the schema is finalized, run the command again with the --remote flag.
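D1 databases are SQLite under the hood, so the file contains plain SQLite DDL. As a hypothetical example, ./sql/files.sql could define a table for tracking uploaded files:

```sql
-- Hypothetical schema: a table tracking uploaded files.
CREATE TABLE IF NOT EXISTS files (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  name TEXT NOT NULL,
  size INTEGER,
  created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
```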
Ad-hoc queries are supported with the --command option:
npx wrangler d1 execute db0 --local --command 'SELECT * FROM users;'
KV
KV is a distributed Key-Value Store that provides fast, reliable, and scalable storage for applications. It takes advantage of Cloudflare's global network, ensuring data is stored and distributed across multiple data centers.
To create a new namespace, run:
npx wrangler kv:namespace create kv0
Add a binding in wrangler.toml:
[[kv_namespaces]]
binding = "kv"
id = "0ae25e3203a0465de3c1a935e73fb92c"
Here is how to use it in your code:
await env.kv.put(key, JSON.stringify(data))
await env.kv.get(key, {type: 'json'})
Pages
Cloudflare Pages is similar to Workers but is designed for deploying front-end applications and static sites.
Infrastructure as Code
Infrastructure as Code (IaC) is a methodology that uses software development practices to manage and provision infrastructure. Instead of manually configuring servers, networks, and other IT resources, IaC leverages code to define and automate these processes.
The concept of IaC emerged in the early 2000s as a response to the growing complexity of IT environments. Early tools like Puppet and Chef pioneered the use of declarative languages to describe infrastructure configurations. Over time, IaC gained popularity with the rise of cloud computing and DevOps practices.
Tools
Ansible is a configuration management tool created by Michael DeHaan and released as an open-source project in 2012. It quickly gained popularity due to its simplicity and agentless architecture. Ansible uses a YAML-based language, which is easy to read. An example playbook that installs the Apache web server:
- name: Install Apache Web Server
  hosts: web_servers
  become: yes
  tasks:
    - name: Ensure Apache is installed
      apt:
        name: apache2
        state: present
    - name: Start Apache service
      service:
        name: apache2
        state: started
Terraform was released in 2014 by HashiCorp, a company known for its open-source infrastructure tools, such as Packer and Consul. Terraform uses its own language, HCL (HashiCorp Configuration Language), to define infrastructure. The following code ensures that an AWS EC2 instance is allocated:
resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  key_name      = "my-key-pair"

  tags = {
    Name = "My Instance"
  }
}
Pulumi takes a different approach: it uses general-purpose programming languages to define infrastructure, providing a more familiar and intuitive experience for developers. It supports multiple languages; the following creates an EC2 instance in JavaScript:
import * as pulumi from "@pulumi/pulumi"
import * as aws from "@pulumi/aws"

const instance = new aws.ec2.Instance("web-server", {
  ami: "ami-0c55b159cbfafe1f0",
  instanceType: "t2.micro",
  keyName: "my-key-pair",
  tags: {
    Name: "web-server"
  }
})
Unlike other IaC tools, which are aimed primarily at DevOps engineers, SST (Serverless Stack Toolkit) is tailored specifically for developers building serverless applications. It is built on top of Pulumi and uses TypeScript for code.
Why IaC
Setting up deployment environments is often tedious, repetitive, and time-consuming, while also requiring significant expertise.
While automation can address certain aspects, IaC goes beyond simple automation. It provides version control and documentation for infrastructure, and IaC tools use high-level languages to abstract away much of the complexity involved in infrastructure management.
SST
SST (Serverless Stack Toolkit) is a framework that simplifies building serverless applications on AWS and Cloudflare. It provides a high-level abstraction over cloud resources, making it easier to define and deploy serverless infrastructure.
Unlike other infrastructure-as-code tools that create their own languages, SST uses TypeScript to define your infrastructure. This offers advantages like developer familiarity, code reuse, and type safety.
Deploying to Cloudflare
npm init -y
npx sst@latest init
npm install
Modify the run() { } part of sst.config.ts as follows, which adds a worker:
async run() {
  const worker = new sst.cloudflare.Worker("MyWorker", {
    handler: "./index.ts",
    url: true,
  });
  return {
    api: worker.url,
  }
}
Add an index.ts:
export default {
  async fetch(req: Request) {
    return new Response(`hello!`)
  },
}
Create Cloudflare API Token
Go to https://dash.cloudflare.com/profile/api-tokens, click "Create Token", and choose the template "Edit Cloudflare Workers". Save the token to .env:
export CLOUDFLARE_API_TOKEN=m5wULRSb2TWym8zlgl1f5FZfhbmCZP3IrChvQWth
Deploy
Start dev mode:
npx sst dev
This will give you a URL for your API for development purposes, and SST will keep it continuously deployed as the code changes.
To make a production deployment, run:
npx sst deploy --stage production
This will give you a different URL for the production API.
Link other resources
Modify sst.config.ts to add a new bucket and link it to the worker:
async run() {
  const bucket = new sst.cloudflare.Bucket("MyBucket")
  const worker = new sst.cloudflare.Worker("MyWorker", {
    handler: "./index.ts",
    link: [bucket],
    url: true,
  });
  return {
    api: worker.url,
  }
}
Modify index.ts to use the bucket:
import { Resource } from "sst"

export default {
  async fetch(req: Request) {
    if (req.method == "PUT") {
      const key = crypto.randomUUID()
      await Resource.MyBucket.put(key, req.body, {
        httpMetadata: {
          contentType: req.headers.get("content-type"),
        },
      })
      return new Response(`Object created with key: ${key}`)
    }
    if (req.method == "GET") {
      const url = new URL(req.url)
      const result = await Resource.MyBucket.get(url.pathname.slice(1))
      return new Response(result.body, {
        headers: {
          "content-type": result.httpMetadata.contentType,
        },
      })
    }
  },
}
Verify the API
curl -X PUT --header "Content-Type: application/json" -d @package.json <URL>
curl -i <URL>/<key>
Why SST
As the above example shows, SST allows you to define your infrastructure as code, eliminating the need to manually create resources like workers and buckets through the cloud console or wrangler. This significantly streamlines your development process.
Monitoring and Log Management
Monitoring and log management are essential components of modern software development and operations. They provide valuable insights into the health, performance, and behavior of applications, allowing developers and operations teams to identify and address issues proactively.
Monitoring
Monitoring involves collecting and analyzing data about the performance and health of an application or system. Key metrics include:
- Throughput
- Response times
- Error rates
- Resource utilization
Monitoring tools help visualize and analyze this data, identify trends, and detect anomalies. For example, you can use Prometheus for data collection and Grafana for visualization.
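For illustration, here is how two of these metrics could be computed from a window of request samples. This is a minimal sketch; the sample data and field names are made up:

```javascript
// Each sample records one request: response time in ms and a success flag.
const recent = [
  { ms: 12, ok: true },
  { ms: 8, ok: true },
  { ms: 230, ok: false },
  { ms: 15, ok: true },
  { ms: 9, ok: true },
]

// Error rate: fraction of failed requests in the window.
function errorRate(samples) {
  return samples.filter((s) => !s.ok).length / samples.length
}

// Nearest-rank percentile of response times (e.g. p95).
function percentile(samples, p) {
  const sorted = samples.map((s) => s.ms).sort((a, b) => a - b)
  return sorted[Math.max(Math.ceil((p / 100) * sorted.length) - 1, 0)]
}

console.log(errorRate(recent))      // 0.2
console.log(percentile(recent, 95)) // 230
```

In practice, a monitoring agent computes these aggregates continuously over a sliding window rather than on a fixed array.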
Log Management
Log files contain crucial information about application behavior, errors, and performance, making them vital for debugging and troubleshooting.
For small applications deployed on a limited number of machines, logs can be searched using tools like SSH and grep. However, for larger applications or ephemeral runtimes like containers, centralized logging is essential. This approach involves collecting logs from all servers, storing them in a structured format, and indexing them for fast search and retrieval.
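Structured logging is what makes this indexing practical. A minimal sketch of a logger that emits one JSON object per line (the field names here are illustrative, not a standard):

```javascript
// Emit one JSON object per line ("JSON Lines") so a log pipeline
// can parse and index entries without fragile text parsing.
function logEvent(level, message, fields = {}) {
  const entry = {
    time: new Date().toISOString(),
    level,
    message,
    ...fields,
  }
  console.log(JSON.stringify(entry))
  return entry
}

logEvent("info", "user signed in", { userId: 42, durationMs: 87 })
```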
A popular choice for log management is the ELK Stack (Elasticsearch, Logstash, and Kibana). Additionally, there are commercial services like Datadog and New Relic that offer robust log management and monitoring solutions.