Zettelkasten Knowledge Management System
Zettelkasten knowledge and info management • Zettelkasten Method
Revolutionary approach to building interconnected knowledge systems:
Core Principles:
Atomic Notes:
One Idea Per Note: Each note should contain exactly one concept
Unique Identifiers: Every note gets a permanent, unique ID
Self-Contained: Notes should be understandable without context
Evergreen: Notes are continuously refined and updated
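These principles are easy to automate. A minimal sketch of creating an atomic note with a permanent timestamp ID (the directory name, filename scheme, and section layout are illustrative, mirroring the conventions used later in this note):

```python
from datetime import datetime
from pathlib import Path

def create_note(title: str, notes_dir: str = "zettelkasten") -> Path:
    """Create an atomic note with a permanent timestamp ID."""
    note_id = datetime.now().strftime("%Y%m%d%H%M")  # e.g. 202012171030
    slug = title.replace(" ", "-")
    path = Path(notes_dir) / f"{note_id}-{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(
        f"# {note_id} - {title}\n\n"
        "## Content\n\n## Links\n-\n\n## Tags\n#\n\n## References\n-\n"
    )
    return path
```

Note that minute-granular IDs can collide if you create two notes in the same minute; a real tool would append a suffix.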
Interconnected Structure:
Notes reference each other by their permanent IDs, forming a growing web of ideas rather than a hierarchy. For example, a note on Python memory management (`202012171030`) links out to a note on general garbage-collection algorithms (`202012150945`), and each note carries its own ID, title, links, and tags.
Implementation Strategies:
Digital Zettelkasten:
# 202012171030 - Python Memory Management
Python's memory management combines several strategies:
## Reference Counting
- Each object maintains a count of references
- When count reaches zero, object is immediately freed
- **Problem**: Cannot handle circular references
## Cycle Detection
- Periodic scan for unreachable circular references
- Uses mark-and-sweep algorithm
- **Trade-off**: Introduces pause times
## Memory Pools
- Small objects use pymalloc for efficiency
- Reduces fragmentation for common allocation patterns
- **Benefit**: Faster allocation/deallocation
**Connected Ideas:**
- [[202012150945]] - General GC algorithms
- [[202012160800]] - CPython implementation details
- [[202012180900]] - Memory profiling techniques
**References:**
- CPython source: Objects/obmalloc.c
- PEP 442: Safe object finalization
Linking Strategies:
Progressive Summarization: Highlight key insights in existing notes
Index Notes: Create overview notes that link to related concepts
Concept Maps: Visual representation of note relationships
Tag Systems: Multiple categorization schemes
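Index notes can be generated rather than maintained by hand. A minimal sketch, assuming `[[...]]`-style links and timestamp-prefixed filenames as in the examples here (`build_index` is an illustrative helper, not an existing tool):

```python
import re
from pathlib import Path

LINK_RE = re.compile(r"\[\[(\d+)\]\]")

def build_index(notes_dir: str) -> str:
    """Build an index note: each note ID with its outgoing [[links]]."""
    lines = ["# Index", ""]
    for path in sorted(Path(notes_dir).glob("*.md")):
        note_id = path.stem.split("-")[0]  # timestamp prefix before first dash
        links = sorted(set(LINK_RE.findall(path.read_text())))
        links_str = ", ".join(f"[[{l}]]" for l in links) or "(no links)"
        lines.append(f"- [[{note_id}]] -> {links_str}")
    return "\n".join(lines)
```

Running this periodically gives an always-current overview note without manual bookkeeping.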
# Obsidian - Graph-based note taking
# Features: Link visualization, backlinks, graph view
# Roam Research - Bi-directional linking
# Features: Block references, daily notes, query system
# Zettlr - Academic writing focus
# Features: Citation management, LaTeX support
# Plain text + scripts
mkdir zettelkasten
cd zettelkasten
# Create new note with timestamp ID
new_note() {
  local id=$(date +%Y%m%d%H%M)
  local title="$1"
  local filename="${id}-${title// /-}.md"
  cat > "$filename" << EOF
# $id - $title
## Content
## Links
-
## Tags
#
## References
-
EOF
  echo "Created: $filename"
  $EDITOR "$filename"
}
Maintenance Practices:
#!/bin/bash
# find-orphans.sh - regular review and connection script
echo "=== Orphaned Notes (no incoming links) ==="
for note in *.md; do
  note_id=$(basename "$note" .md | cut -d'-' -f1)
  if ! grep -l "\[\[$note_id\]\]" *.md > /dev/null 2>&1; then
    echo "Orphan: $note"
  fi
done
echo -e "\n=== Notes with few connections ==="
for note in *.md; do
  link_count=$(grep -o "\[\[[0-9]*\]\]" "$note" | wc -l)
  if [ "$link_count" -lt 2 ]; then
    echo "Few links: $note ($link_count connections)"
  fi
done
Benefits for Developers:
Technical Knowledge Building:
API Documentation: Personal notes on libraries and frameworks
Problem Solutions: Reusable solutions to common problems
Architecture Patterns: Design patterns and their applications
Learning Journal: Track understanding of complex concepts
Example Developer Zettelkasten:
A day of development work might produce several sibling notes, e.g. `202012171030` and two others created the same day, each with its own title, a Links section pointing at related notes, and a Tags section connecting it to broader topics.
Tmuxinator - Tmux Session Templates
Templating tmux with tmuxinator
Powerful tool for creating and managing complex tmux session layouts:
Installation and Setup:
Installation:
# Install tmuxinator
gem install tmuxinator
# Or with bundler
echo 'gem "tmuxinator"' >> Gemfile
bundle install
# Set up shell completion (zsh)
echo 'source ~/.gem/gems/tmuxinator-*/completion/tmuxinator.zsh' >> ~/.zshrc
# Set editor for configuration
export EDITOR='vim'  # or your preferred editor
Project Configuration:
Basic Project Template:
# ~/.tmuxinator/web-development.yml
name: web-development
root: ~/projects/my-website

# Optional: Run commands before starting
pre_window: cd ~/projects/my-website

# Terminal windows configuration
windows:
  - editor:
      layout: main-vertical
      panes:
        - vim
        - # empty pane for terminal commands
  - server:
      panes:
        - npm run dev
        - # empty pane for server monitoring
  - database:
      panes:
        - mysql -u root -p
        - redis-cli
  - monitoring:
      layout: tiled
      panes:
        - htop
        - tail -f logs/development.log
        - watch -n 1 'ps aux | grep node'
        - # empty pane for additional monitoring
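Project files like this are plain YAML, so starter configs can also be generated programmatically. A sketch that writes a minimal two-window project (the template contents, `serve_cmd`, and `config_dir` default are illustrative choices, not tmuxinator requirements):

```python
from pathlib import Path

# A minimal starter config with an editor window and a server window.
TEMPLATE = """\
name: {name}
root: {root}

windows:
  - editor:
      layout: main-vertical
      panes:
        - vim
        - # empty pane for terminal commands
  - server:
      panes:
        - {serve_cmd}
"""

def write_tmuxinator_config(name: str, root: str,
                            serve_cmd: str = "npm run dev",
                            config_dir: str = "~/.tmuxinator") -> Path:
    """Write <config_dir>/<name>.yml so `tmuxinator start <name>` can pick it up."""
    path = Path(config_dir).expanduser() / f"{name}.yml"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(TEMPLATE.format(name=name, root=root, serve_cmd=serve_cmd))
    return path
```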
Advanced Configuration:
# ~/.tmuxinator/microservices.yml
name: microservices
root: ~/work/microservices

# Pre-execution commands
pre: docker-compose up -d postgres redis
post: echo "Development environment ready!"

# Environment variables
pre_window: export NODE_ENV=development

windows:
  - api:
      root: ~/work/microservices/api
      layout: main-horizontal
      panes:
        - # Main development pane
        - npm run dev:watch
        - npm run test:watch
  - frontend:
      root: ~/work/microservices/frontend
      layout: main-vertical
      panes:
        - npm run serve
        - npm run test:unit -- --watch
        - # Static analysis
  - services:
      root: ~/work/microservices
      layout: tiled
      panes:
        - cd auth-service && npm run dev
        - cd notification-service && npm run dev
        - cd payment-service && npm run dev
        - docker-compose logs -f
  - monitoring:
      layout: even-horizontal
      panes:
        - curl -s http://localhost:3000/health | jq
        - watch -n 5 'docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"'
Advanced Features:
Custom Commands and Hooks:
# ~/.tmuxinator/data-pipeline.yml
name: data-pipeline
root: ~/data-pipeline

# Startup sequence
pre:
  - docker-compose up -d kafka zookeeper elasticsearch
  - sleep 10  # Wait for services to be ready

# Custom window startup commands
pre_window:
  - source venv/bin/activate
  - export PYTHONPATH=$PWD

windows:
  - kafka:
      panes:
        - kafka-console-consumer --topic events --bootstrap-server localhost:9092
        - kafka-console-producer --topic events --bootstrap-server localhost:9092
        - kafka-topics --list --bootstrap-server localhost:9092
  - pipeline:
      panes:
        - python -m pipeline.consumer
        - python -m pipeline.producer
        - python -m pipeline.monitor
  - analysis:
      panes:
        - jupyter notebook --port=8888 --no-browser
        - python -c "import pandas as pd; import numpy as np; print('Ready for analysis')"

# Cleanup on exit
post: docker-compose down
Session Management:
# Create new project configuration
tmuxinator new myproject
# Start a project session
tmuxinator start web-development
tmuxinator s web-development # short form
# List available projects
tmuxinator list
# Edit existing project
tmuxinator edit web-development
# Copy project configuration
tmuxinator copy web-development mobile-development
# Delete project
tmuxinator delete old-project
# Debug configuration
tmuxinator debug web-development
Integration with Development Workflow:
Git Hooks Integration:
#!/bin/bash
# .git/hooks/post-checkout
# Start development environment after branch checkout
branch_name=$(git symbolic-ref --short HEAD)

case "$branch_name" in
  feature/*)
    tmuxinator start feature-development
    ;;
  hotfix/*)
    tmuxinator start hotfix-environment
    ;;
  main|master)
    tmuxinator start production-monitoring
    ;;
esac
Advanced Napkin Math for System Estimation
Advanced Napkin Math: Estimating System Performance - SREcon19
Napkin Math - Simon Eskildsen
Essential skills for back-of-the-envelope system calculations:
Fundamental Constants:
Order-of-magnitude latency numbers every programmer should know:

CPU and memory:
- L1 cache reference: ~1 ns
- Branch mispredict: ~3 ns
- L2 cache reference: ~4 ns
- Mutex lock/unlock: ~25 ns
- Main memory reference: ~100 ns
- Read 1 MB sequentially from memory: ~0.25 ms

Storage and network:
- SSD random read: ~150 μs
- Read 1 MB sequentially from SSD: ~1 ms
- HDD seek: ~10 ms
- Send 2 KB over a 1 Gbps network: ~20 μs
- Round trip within the same datacenter: ~0.5 ms
- Cross-ocean round trip (e.g. US to Europe): ~150 ms
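A handy use of these constants is summing a latency budget for a request path. A sketch (the dictionary keys and the example path are illustrative):

```python
# Order-of-magnitude latencies in nanoseconds, from the constants above
NS = {
    "l1_cache": 1,
    "main_memory": 100,
    "ssd_random_read": 150_000,             # ~150 us
    "datacenter_round_trip": 500_000,       # ~0.5 ms
    "cross_ocean_round_trip": 150_000_000,  # ~150 ms
}

def budget_ms(steps):
    """Sum the rough latency of a request path, in milliseconds."""
    return sum(NS[s] for s in steps) / 1e6

# One datacenter hop plus two SSD reads: dominated by the network hop
path = ["datacenter_round_trip", "ssd_random_read", "ssd_random_read"]
# budget_ms(path) -> 0.8 (ms)
```

Even this crude sum immediately shows which step dominates and whether an optimization is worth pursuing.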
Capacity Planning Numbers:
Order-of-magnitude rules of thumb for sizing:

Memory and database:
- Typical application server: budget a few MB of memory per concurrent request
- Database connection pools: tens of connections per app server, a few thousand total per database
- Working set should fit in RAM; target cache hit rates above ~95%
- Capacity headroom: plan for ~20-30% spare at projected peak

Network and throughput:
- Gigabit NIC: 125 MB/s theoretical, ~100 MB/s in practice
- Typical web page: ~1-5 MB spread across dozens of requests
- TCP setup costs one round trip; TLS adds one to two more
- Load balancers: tens of thousands of concurrent connections per instance
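Two of these rules of thumb as tiny helpers (the 80% headroom and NIC efficiency factors are illustrative defaults):

```python
def fits_in_ram(working_set_gb, ram_gb, headroom=0.8):
    """Rule of thumb: keep the hot working set within ~80% of RAM."""
    return working_set_gb <= ram_gb * headroom

def practical_nic_mb_per_s(gbps=1.0, efficiency=0.8):
    """1 Gbps is 125 MB/s on paper; ~80% of that is achievable in practice."""
    return gbps * 125 * efficiency
```

For example, a 60 GB working set on a 64 GB box fails the headroom check, which is exactly the kind of red flag napkin math is for.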
Estimation Techniques:
Request Capacity Calculation:
# Example: Web server capacity estimation
def calculate_web_capacity():
    # Server specifications
    cpu_cores = 8
    memory_gb = 32

    # Application characteristics
    avg_request_time_ms = 100
    memory_per_request_mb = 2
    cpu_utilization_target = 0.7

    # CPU-bound calculation
    requests_per_second_cpu = (cpu_cores * 1000 / avg_request_time_ms) * cpu_utilization_target
    print(f"CPU limit: {requests_per_second_cpu:.0f} req/s")

    # Memory-bound calculation
    concurrent_requests_memory = (memory_gb * 1024) / memory_per_request_mb * 0.8  # 80% utilization
    requests_per_second_memory = concurrent_requests_memory / (avg_request_time_ms / 1000)
    print(f"Memory limit: {requests_per_second_memory:.0f} req/s")

    # Bottleneck is the lower value
    capacity = min(requests_per_second_cpu, requests_per_second_memory)
    print(f"Estimated capacity: {capacity:.0f} req/s")
    return capacity

calculate_web_capacity()
Database Sizing:
def estimate_database_size():
    # Business metrics
    daily_active_users = 100_000
    actions_per_user_per_day = 50
    data_retention_days = 2555  # ~7 years

    # Technical metrics
    avg_record_size_bytes = 1024  # 1KB per record
    index_overhead_multiplier = 1.5
    replication_factor = 3

    # Calculate storage requirements
    daily_records = daily_active_users * actions_per_user_per_day
    total_records = daily_records * data_retention_days
    raw_data_gb = (total_records * avg_record_size_bytes) / (1024 ** 3)
    with_indexes_gb = raw_data_gb * index_overhead_multiplier
    with_replication_gb = with_indexes_gb * replication_factor

    print(f"Daily records: {daily_records:,}")
    print(f"Total records: {total_records:,}")
    print(f"Raw data: {raw_data_gb:.1f} GB")
    print(f"With indexes: {with_indexes_gb:.1f} GB")
    print(f"With replication: {with_replication_gb:.1f} GB")

    # Growth planning (20% annual growth)
    annual_growth = 1.2
    five_year_size = with_replication_gb * (annual_growth ** 5)
    print(f"5-year projection: {five_year_size:.1f} GB")

estimate_database_size()
Network Bandwidth Estimation:
def estimate_network_bandwidth():
    # Application characteristics
    peak_requests_per_second = 10_000
    avg_response_size_kb = 50
    static_content_ratio = 0.6  # 60% cached/CDN

    # Calculate bandwidth needs
    dynamic_requests_per_second = peak_requests_per_second * (1 - static_content_ratio)
    bandwidth_mbps = (dynamic_requests_per_second * avg_response_size_kb * 8) / 1024  # Convert to Mbps

    # Add safety margins
    tcp_overhead = 1.1  # 10% TCP overhead
    burst_capacity = 2.0  # Handle 2x peak traffic
    required_bandwidth = bandwidth_mbps * tcp_overhead * burst_capacity

    print(f"Peak requests/s: {peak_requests_per_second:,}")
    print(f"Dynamic requests/s: {dynamic_requests_per_second:,}")
    print(f"Base bandwidth: {bandwidth_mbps:.1f} Mbps")
    print(f"With overhead & burst: {required_bandwidth:.1f} Mbps")

    # Server recommendations
    if required_bandwidth < 100:
        print("Recommendation: Single server with gigabit connection")
    elif required_bandwidth < 1000:
        print("Recommendation: Load balancer with multiple servers")
    else:
        print("Recommendation: CDN + multiple regions")

estimate_network_bandwidth()
Common Calculations:
Daily traffic:
- daily_requests = daily_active_users × actions_per_user_per_day
- peak_qps ≈ (daily_requests / 86,400) × peak_multiplier (typically 2-3×)
Cache sizing:
- backend_qps = total_qps × (1 − cache_hit_ratio); aim for hit ratios above ~0.8
Query latency:
- total_latency ≈ cache_latency + (1 − cache_hit_ratio) × backend_latency
- p99 latency is typically several times the average
Storage growth:
- storage = daily_records × avg_record_size × retention_days × replication_factor
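The calculations above can be sketched as small helpers (the names and default multipliers are illustrative):

```python
def peak_qps(daily_active_users, actions_per_user, peak_multiplier=2.5):
    """Average QPS from daily volume, scaled to an estimated peak."""
    return daily_active_users * actions_per_user / 86_400 * peak_multiplier

def backend_qps(total_qps, cache_hit_ratio=0.8):
    """Only cache misses reach the backend."""
    return total_qps * (1 - cache_hit_ratio)

def storage_gb(daily_records, avg_record_bytes, retention_days, replication_factor=3):
    """Raw storage footprint over the retention window, replicated."""
    return daily_records * avg_record_bytes * retention_days * replication_factor / 1024**3
```

Chaining them turns a business metric ("100k users doing 50 actions a day") into an infrastructure number in a couple of lines.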
These tools and techniques represent essential skills for modern software development - building knowledge systems, managing development environments efficiently, and making informed architectural decisions through quantitative analysis.