# Equity Analysis Dashboard

Central warehouse for all equity analysis. Live via Tailscale Funnel at:
`https://bowie-surface-book.tail8a8bda.ts.net`

# Architecture

```
equity_analysis/
├── dashboard.py            # Flask app (port 5051)
├── sectors.json            # Sector/ticker metadata (NOT canonical commentary storage)
├── history.jsonl           # Append-only audit log of edits
├── commentary.db           # Canonical append-only commentary store (SQLite)
├── backups/                # Auto-generated SQLite + JSONL commentary backups
├── README_DB_HARDENING.md  # Safety notes for lower-capability models
├── memory_prices.json      # TrendForce spot price snapshots (auto-populated)
└── templates/
    └── equity.html         # Frontend (dark theme, auto-refresh)
```

> **Critical:** Commentary is no longer canonically stored in `sectors.json`. Use the commentary API / SQLite-backed paths. Read `README_DB_HARDENING.md` before touching storage logic.

---

# 🚀 Agent Quick-Start

**Before editing anything:**
1. Read `sectors.json` — understand current structure
2. Check `history.jsonl` — recent edits for context
3. Verify dashboard is running: `curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:5051` → should return `200`
4. If not running: `cd /home/bowie/.openclaw/workspace/equity_analysis && nohup python3 dashboard.py > dashboard.log 2>&1 &`

**After making changes:**
1. Edit `sectors.json` (or use API — see below)
2. Append to `history.jsonl` for audit trail
3. Restart dashboard: `fuser -k 5051/tcp 2>/dev/null; sleep 1; cd /home/bowie/.openclaw/workspace/equity_analysis && nohup python3 dashboard.py > dashboard.log 2>&1 &`
4. Verify: `curl -s http://127.0.0.1:5051 | grep -o 'Equity Analysis'` → should return `Equity Analysis`
5. Log update to Notion Dashboard Documentation page (AgentOps > Equity Analysis Dashboard — Central Warehouse)
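The `history.jsonl` append in step 2 can be sketched as a small helper (the entry fields follow the examples in this README; the helper name itself is illustrative):

```python
import json
from datetime import datetime

def append_history(path, ticker, field, old, new):
    """Append one audit entry to history.jsonl (append-only, one JSON object per line)."""
    entry = {
        'ts': datetime.now().strftime('%Y-%m-%d %H:%M'),
        'ticker': ticker,
        'field': field,
        'old': old,
        'new': new,
    }
    with open(path, 'a') as f:
        f.write(json.dumps(entry) + '\n')
```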

---

# 📋 Common Tasks

## Add a New Ticker

**Step 1:** Determine the correct sector. Use GICS classification:
- Technology, Energy, Financials, Healthcare, Industrials
- Consumer Discretionary, Consumer Staples
- Communication Services, Utilities, Real Estate, Materials

**Step 2:** Add to `sectors.json`:
```json
{
  "summary": "One-line description",
  "catalysts": ["Catalyst 1", "Catalyst 2"],
  "price_target": null,
  "notes": "",
  "detailed_thesis": "Bear case first, then bull case. See Thesis Checklist below.",
  "commentary_log": [],
  "analysis_ts": "2026-03-15 21:00"
}
```
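Wiring that object into the loaded `sectors.json` structure can be sketched as follows (the nesting follows the `data[sector]['names'][ticker]` access pattern used elsewhere in this README; the helper name is illustrative):

```python
def add_ticker(data, sector, ticker, summary, catalysts, ts):
    """Insert a new ticker under data[sector]['names'] with the standard field set."""
    data[sector]['names'][ticker] = {
        'summary': summary,
        'catalysts': catalysts,
        'price_target': None,
        'notes': '',
        'detailed_thesis': '',
        'commentary_log': [],
        'analysis_ts': ts,
    }
    return data
```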

**Step 3:** Append to `history.jsonl`:
```json
{"ts": "2026-03-15 21:00", "ticker": "TICKER", "field": "added", "old": "", "new": "Added to Technology"}
```

**Step 4:** Restart dashboard, verify, log to Notion.

## Update an Existing Ticker's Thesis

**Step 1:** Read current thesis from `sectors.json`
**Step 2:** Archive old thesis to `commentary_log` before replacing:
```python
# Move current thesis to commentary log FIRST
from datetime import datetime

ts = datetime.now().strftime('%Y-%m-%d %H:%M')
old_thesis = data['Sector']['names']['TICKER']['detailed_thesis']
if old_thesis:
    data['Sector']['names']['TICKER']['commentary_log'].append({
        'ts': ts,
        'text': f'[ARCHIVED THESIS] {old_thesis}'
    })
```
**Step 3:** Write new `detailed_thesis`
**Step 4:** Update `analysis_ts` to current time
**Step 5:** Append to `history.jsonl`:
```json
{"ts": "2026-03-15 21:00", "ticker": "TICKER", "field": "detailed_thesis", "old": "previous text...", "new": "updated text..."}
```
**Step 6:** Restart, verify, log to Notion.
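Steps 2–4 can be combined into one helper (a sketch; the function name is illustrative, and it assumes the `data[sector]['names'][ticker]` layout shown above):

```python
from datetime import datetime

def update_thesis(data, sector, ticker, new_thesis):
    """Archive the current thesis to commentary_log, then replace it and bump analysis_ts."""
    ts = datetime.now().strftime('%Y-%m-%d %H:%M')
    entry = data[sector]['names'][ticker]
    old = entry.get('detailed_thesis')
    if old:
        # Archive FIRST; commentary_log is append-only
        entry['commentary_log'].append({'ts': ts, 'text': f'[ARCHIVED THESIS] {old}'})
    entry['detailed_thesis'] = new_thesis
    entry['analysis_ts'] = ts
    return entry
```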

## Add Commentary (without rewriting thesis)

**Commentary quality standard:** write a **real research note**, not a one-liner. Target **at least one or two paragraphs** unless the user explicitly wants something terse. Good modal commentary should include:
- what changed in the tape / thesis
- why it matters
- what the market may be missing
- which references informed the note

**Preferred path — API:**
```python
import requests

requests.post(
    'http://127.0.0.1:5051/api/ticker/TICKER/commentary',
    json={
        'text': '''First paragraph: what changed and why it matters.

Second paragraph: valuation / positioning / catalyst interpretation.

References used:
1. Latest earnings call transcript
2. 10-Q
3. Management guidance notes'''
    }
)
```

**Do not directly edit commentary inside `sectors.json`.** Canonical commentary now lives in SQLite / the commentary API.

## Update Sector Thesis

**Step 1:** Read current thesis
**Step 2:** Edit `thesis` field at sector level (not inside `names`)
**Step 3:** Also update `best`/`worst` if the thesis change affects which names are strongest/weakest
**Step 4:** Append to `history.jsonl`:
```json
{"ts": "2026-03-15 21:00", "ticker": "SECTOR:Technology", "field": "thesis", "old": "...", "new": "..."}
```
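A dict-level sketch of steps 2–3 (the field names `thesis`, `best`, and `worst` come from this README; the helper itself is illustrative):

```python
def update_sector(data, sector, thesis=None, best=None, worst=None):
    """Update sector-level fields in place; only the provided ones are overwritten."""
    s = data[sector]
    if thesis is not None:
        s['thesis'] = thesis
    if best is not None:
        s['best'] = best
    if worst is not None:
        s['worst'] = worst
    return s
```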

## Add a New Sector

Don't. The 11 GICS sectors are fixed in `SECTOR_ORDER` in `dashboard.py`. If you need a new sector, you must:
1. Add it to `SECTOR_ORDER` in `dashboard.py`
2. Add it to `sectors.json`
3. Update the frontend if needed

---

# ✅ Thesis Quality Checklist

Every `detailed_thesis` should follow this structure:

**1. Bear Case First (always start here)**
- What could go wrong? What's the market missing?
- Be specific: "Valuation at 100x P/E prices in perfection" not "expensive"
- Include quantifiable risks when possible

**2. Bull Case (with validation)**
- Why does this still work despite the risks?
- What catalysts drive the thesis?
- What's the moat / competitive advantage?

**3. Conviction Indicators**
- Is this a core position or satellite? (note in `notes` field)
- What would change your mind? (key risk triggers)
- Time horizon: near-term catalyst vs. structural tailwind

**4. Contrarian Test**
- If the consensus is bullish, what's the bear case that everyone is ignoring?
- If the consensus is bearish, what's the bull case that the market is pricing too pessimistically?
- If you're agreeing with consensus, you're probably not thinking hard enough

**Bad thesis example:**
> "NVDA dominates AI chips. CUDA moat is strong. B200 ramp will drive revenue."

**Good thesis example:**
> "Bear: NVDA at 40x forward P/E prices in sustained AI capex growth. If hyperscaler spending plateaus (like cloud did in 2023), multiple compresses 30%+ overnight. China export restrictions add regulatory tail risk. Bull: CUDA ecosystem moat is genuine — 90%+ of AI workloads run on NVDA. B200 ramp doubles inference throughput, locking in share. Every AI dollar still flows through NVDA first. Risk triggers: hyperscaler guidance cuts, China policy escalation, AMD MI300 market share gains above 15%."

---

# 📏 Data Conventions

**Ticker symbols:** Always uppercase.

**Timestamps:** `YYYY-MM-DD HH:MM` format (24-hour, local CT).

**`commentary_log`:** Append-only. **Never delete entries.** This is the permanent record of all analysis evolution. When updating `detailed_thesis`, always archive the old thesis to commentary_log first:

```python
# ARCHIVE BEFORE REPLACING
old = data[sector]['names'][ticker]['detailed_thesis']
if old:
    data[sector]['names'][ticker]['commentary_log'].append({
        'ts': datetime.now().strftime('%Y-%m-%d %H:%M'),
        'text': f'[ARCHIVED THESIS] {old}'
    })
```

Prefix archived theses with `[ARCHIVED THESIS]` so they're easily distinguishable from live commentary. This preserves the full analytical history — future agents can see what changed and why. Live commentary entries have the shape:
```json
{"ts": "2026-03-15 21:00", "text": "Observation here"}
```
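Because archived theses share the log with live notes, the `[ARCHIVED THESIS]` prefix makes them easy to separate programmatically (a sketch; the helper name is illustrative):

```python
def split_commentary(log):
    """Partition commentary_log entries into archived theses and live notes."""
    archived = [e for e in log if e['text'].startswith('[ARCHIVED THESIS]')]
    live = [e for e in log if not e['text'].startswith('[ARCHIVED THESIS]')]
    return archived, live
```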

**`notes` field:** Use for flags/speculation, e.g.:
- `"⚠️ SPECULATIVE — illiquid options, high-beta sympathy only"`
- `"Contrarian GLP-1 play at 12x earnings vs LLY at 100x+"`

**`price_target`:** Numeric or null. If set, dashboard shows PT % distance from current price.
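The PT distance shown on the dashboard is percent distance from the current price (a sketch of the arithmetic; the dashboard's exact rounding may differ):

```python
def pt_distance(price_target, current_price):
    """Percent distance from current price to price target; None when no PT is set."""
    if price_target is None:
        return None
    return (price_target - current_price) / current_price * 100
```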

**`catalysts`:** Array of strings. 2-4 items max. Near-term catalysts preferred.

**Best/Worst per sector:** Every sector should have `best` and `worst` tickers with rationale. These display as colored badges on the dashboard.

**SNDK note:** Front expiry options can be liquid during rallies, but 2+ weeks out chain thins rapidly (8-14% bid-ask spreads). Always verify liquidity before recommending.
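The 8-14% spreads above are bid-ask spread as a percent of the midpoint; a quick pre-recommendation check can be sketched like this (illustrative helpers; the 5% threshold is an assumption, not a documented cutoff):

```python
def spread_pct(bid, ask):
    """Bid-ask spread as a percent of the midpoint quote."""
    mid = (bid + ask) / 2
    return (ask - bid) / mid * 100

def is_liquid(bid, ask, max_spread_pct=5.0):
    """Flag a quote as tradeable when the spread is at or below the threshold."""
    return spread_pct(bid, ask) <= max_spread_pct
```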

---

# 🔍 After-Edit Checklist

After any edit to `sectors.json`:

- [ ] **Restart dashboard** — process doesn't hot-reload
```bash
fuser -k 5051/tcp 2>/dev/null; sleep 1
cd /home/bowie/.openclaw/workspace/equity_analysis
nohup python3 dashboard.py > dashboard.log 2>&1 &
sleep 2
curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:5051
# Should return 200
```
- [ ] **Verify the change** — curl the API or check in browser
```bash
curl -s http://127.0.0.1:5051/api/sectors | python3 -c "import json,sys; d=json.load(sys.stdin); print(d['Technology']['names']['NVDA']['detailed_thesis'][:100])"
```
- [ ] **Log to Notion** — update Dashboard Documentation page with what changed
- [ ] **Append history.jsonl** — audit trail for every edit

---

# API Endpoints

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/` | GET | Dashboard HTML |
| `/readme` | GET | This README rendered as HTML |
| `/api/sectors` | GET | Raw sectors.json |
| `/api/detail/<ticker>` | GET | Full ticker detail + options flow |
| `/api/ticker/<ticker>` | PUT | Update ticker fields |
| `/api/ticker/<ticker>/commentary` | POST | Add commentary |
| `/api/ticker/<ticker>/add` | POST | Add new ticker to a sector |
| `/api/sector/<sector>` | PUT | Update sector thesis |
| `/api/history?ticker=X` | GET | Edit history |
| `/api/memory_prices` | GET | DRAM/NAND spot prices |

## Programmatic Updates

```python
import requests

BASE = 'http://127.0.0.1:5051'

# Update thesis
requests.put(f'{BASE}/api/ticker/NVDA', json={
    'detailed_thesis': 'Updated analysis...',
    'price_target': 150.0,
    'analysis_ts': '2026-03-15 21:00'
})

# Add commentary
requests.post(f'{BASE}/api/ticker/NVDA/commentary', json={
    'text': 'Unusual options activity noted'
})

# Add new ticker
requests.post(f'{BASE}/api/ticker/INTC/add', json={
    'sector': 'Technology',
    'summary': 'Legacy chipmaker, turnaround play',
    'catalysts': ['Foundry pivot', 'Government subsidies']
})
```

---

# Memory Spot Prices

Scraper runs automatically on startup + every 4 hours. Fetches from DRAMeXchange (TrendForce):
- **DRAM:** DDR5 16Gb, DDR4 16Gb, DDR4 8Gb
- **NAND:** 512Gb TLC, 256Gb TLC (wafer)

Data stored in `memory_prices.json` as time-series. Sparkline chart renders when 3+ snapshots exist. Served via `/api/memory_prices`.

---

# ⏰ Daily Evaluation Cron

A cron job runs **weekdays at 9:45 AM CT** (market open + 15 min).

- **Script:** `daily_eval.py` — scans all 41 tickers, checks analysis vs current data
- **Flags:** valuation inconsistencies, stale analysis (14+ days), 52-week extremes, price target mismatches
- **Output:** Logged to `memory/YYYY-MM-DD.md`, summary announced to Telegram
- **Agent action:** Reviews flags, updates `sectors.json` if needed (with archival), restarts dashboard

To run manually: `python3 /home/bowie/.openclaw/workspace/equity_analysis/daily_eval.py`
To modify schedule: `openclaw cron edit 8ba13f2b-f7fc-4147-ba35-8f39f20164c8 --cron "MM HH * * 1-5"` (standard cron field order: minute first, then hour)

---

# ⚠️ Known Gotchas

1. **Never overwrite history.** When updating `detailed_thesis`, archive the old version to `commentary_log` first with the `[ARCHIVED THESIS]` prefix. `commentary_log` is append-only — never delete entries. Future agents need to see the analytical evolution.
2. **Dashboard doesn't hot-reload.** Must restart after `sectors.json` changes.
3. **0DTE GEX is meaningless.** ATM gamma is 3-4x higher near expiry — always skip `dte==0`.
4. **SNDK liquidity varies.** Front expiry can be liquid during rallies; further-dated chains thin fast (8-14% spreads). Always verify before recommending.
5. **TrendForce scraper is fragile.** The parser targets individual `<tr>` rows by exact first-column match. If TrendForce changes its HTML structure, the scraper breaks silently.
6. **IBKR data requires the Mac Mini.** Options flow/GEX comes from yfinance (public). IBKR-specific data (positions, live orders) requires the Mac Mini wrapper service.
7. **`fuser -k 5051/tcp` kills the exec session too.** The exec process matches the pattern. Use `kill $(pgrep -f 'python3.*dashboard.py' | grep -v <pid>)` or just `fuser -k` and restart.
8. **Notion integration doesn't have access to all pages.** The Run Log and Shared Knowledge pages need to be shared with the "Bowie Notebook" integration.