shank committed on
Commit dc7eb3f · 1 Parent(s): 5dcd156

Revert "Fix: Dockerfile"


This reverts commit 5dcd1567af5849e91b8f052e85eab37f8b44b03e.

Files changed (13)
  1. Dockerfile +2 -3
  2. HANDOVER.md +211 -0
  3. get_logs.py +0 -11
  4. req_test.txt +0 -9
  5. req_test2.txt +0 -9
  6. req_test3.txt +0 -10
  7. requirements.txt +3 -4
  8. scratch/normalize_inputs.py +0 -52
  9. test_pip.sh +0 -6
  10. test_pip2.sh +0 -6
  11. test_pip3.sh +0 -6
  12. test_pip4.sh +0 -6
  13. uv.lock +0 -0
Dockerfile CHANGED
@@ -1,12 +1,11 @@
-FROM pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime
+FROM python:3.10-slim
 
 WORKDIR /app
 
 # Install curl for healthcheck
 RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
 
-# torch + CUDA 12.1 + cuDNN 8 are already in the base image.
-# requirements.txt installs only the remaining app-level deps.
+# Install dependencies first (layer cache optimization)
 COPY requirements.txt .
 RUN pip install --no-cache-dir -r requirements.txt
HANDOVER.md ADDED
@@ -0,0 +1,211 @@
# AgentDebuggerEnv — Project Handover

## What This Project Is
A GRPO-trained LLM (Qwen2.5-Coder-7B-Instruct) that learns to debug Python code through structured, hypothesis-driven reasoning. Submitted to the Meta + PyTorch + HuggingFace OpenEnv Hackathon.

---

## Repo & Remotes

| Remote | URL |
|---|---|
| GitHub (source of truth) | https://github.com/shasshaank/meta_hackthon.git |
| HF Training Space | https://huggingface.co/spaces/shashaank0707/AgentDebugger-training-v2 |
| HF Trained Model | https://huggingface.co/shashaank0707/AgentDebugger-trained |

Push to GitHub first, then to the HF Space if needed:
```bash
git push origin main
git push space main --force   # space remote = HF training space
```

The `space` remote URL includes your HF token:
```
https://shashaank0707:YOUR_HF_TOKEN@huggingface.co/spaces/shashaank0707/AgentDebugger-training-v2
```

---

## Project Structure

```
meta_hackathon/
├── app.py                    # Gradio training monitor — launched by HF Space SDK
├── training/
│   └── train_grpo.py         # Main training script (GRPO via TRL)
├── server/
│   ├── reward_calculator.py  # Multi-component reward (format, hypothesis, fix, semantic)
│   ├── models.py             # parse_agent_output() — parses structured LLM output
│   └── app.py                # FastAPI server (for the inference/env Space, not training)
├── data/
│   ├── bugs_tier1.jsonl      # 9 easy bugs (used steps 0–150)
│   ├── bugs_tier2.jsonl      # 31 medium bugs (added at step 150)
│   ├── bugs_tier3.jsonl      # 21 hard bugs (added at step 350 → was 600)
│   └── generate_bugs.py      # Script that generated the bug datasets
├── requirements.txt          # HF Space deps (gradio[oauth,mcp]==6.13.0, cu121 torch)
├── requirements_kaggle.txt   # Kaggle/RunPod deps (no torch pin, bitsandbytes==0.45.3)
├── inference.py              # Inference wrapper for evaluation
├── Dockerfile                # For the inference/env Space (not the training space)
└── README.md                 # HF Space config header (sdk: gradio, app_file: app.py)
```

---

## Dependency Versions (locked — do not change without testing)

| Package | Version | Why pinned |
|---|---|---|
| `trl` | `0.14.0` | First version with `GRPOTrainer` + `GRPOConfig` |
| `pydantic` | `2.12.5` | Only version satisfying both the gradio base AND gradio[mcp] constraints |
| `gradio` | `6.13.0[oauth,mcp]` | HF Space builder requires extras in one install pass |
| `bitsandbytes` | `0.45.3` (Kaggle) / `0.43.3` (HF Space cu121) | 0.45.3 has CUDA 12.x binaries; 0.43.3 works with cu121 |
| `transformers` | `4.46.3` | Tested with TRL 0.14.0 |
| `torch` | `2.5.1+cu121` (HF Space) / pre-installed (Kaggle) | |

**GRPOConfig param name:** `max_completion_length` (NOT `max_new_tokens` — that is the old name and breaks on 0.14.0)
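
For reference, a minimal config sketch using the correct parameter name. The values shown are illustrative placeholders, not the tuned hyperparams (those come from the GPU auto-detection below):

```python
from trl import GRPOConfig

# Illustrative values only: the real hyperparams are chosen at runtime
# by the GPU auto-detection logic in training/train_grpo.py.
config = GRPOConfig(
    output_dir="./checkpoints",
    max_completion_length=256,      # correct name on trl==0.14.0
    num_generations=8,              # completions sampled per prompt (GRPO group size)
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
)
```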

---

## Training Script — Key Design Decisions

### GPU Auto-Detection (train_grpo.py ~line 260)
The script detects the GPU at runtime and sets all hyperparams automatically:

| GPU | dtype | batch | grad_accum | num_gen | max_comp | lora_r |
|---|---|---|---|---|---|---|
| A100 40GB+ | bfloat16 | 2 | 4 | 8 | 256 | 16 |
| V100 32GB | float16 | 1 | 8 | 6 | 220 | 12 |
| T4 / ≤16GB | float16 | 1 | 8 | 4 | 160 | 8 |

**Critical:** P100 is NOT supported — PyTorch 2.x dropped sm_60 support. Use T4 instead.
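
A minimal sketch of that selection logic, assuming detection via `torch.cuda` (hypothetical helper, not the literal code at ~line 260):

```python
import torch

def pick_hyperparams():
    """Map the detected GPU to a row of the table above (sketch)."""
    name = torch.cuda.get_device_name(0)
    mem_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
    if "A100" in name or mem_gb >= 40:
        return dict(dtype=torch.bfloat16, batch=2, grad_accum=4,
                    num_gen=8, max_comp=256, lora_r=16)
    if "V100" in name:
        return dict(dtype=torch.float16, batch=1, grad_accum=8,
                    num_gen=6, max_comp=220, lora_r=12)
    # T4, or any GPU with <=16 GB, falls through to the smallest config
    return dict(dtype=torch.float16, batch=1, grad_accum=8,
                num_gen=4, max_comp=160, lora_r=8)
```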

### Curriculum
- Steps 0–150: Tier 1 bugs only (9 bugs)
- Steps 150–350: Tier 1 + Tier 2 (40 bugs)
- Steps 350+: All tiers (61 bugs)
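
In code, the tier gating can be as simple as the following sketch (assumed structure, not the literal implementation):

```python
def active_bugs(step, tier1, tier2, tier3):
    """Return the bug pool for the current global step (curriculum above)."""
    pool = list(tier1)        # steps 0–150: tier 1 only
    if step >= 150:
        pool += tier2         # steps 150–350: tiers 1 + 2
    if step >= 350:
        pool += tier3         # steps 350+: all 61 bugs
    return pool
```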

### Reward Components (server/reward_calculator.py)
| Component | Weight | What it measures |
|---|---|---|
| format_compliance | 0.10 | All 5 fields present (OBSERVATION/HYPOTHESIS/CONFIDENCE/ACTION/DETAIL) |
| hypothesis_quality | 0.20 | Length + references to specific variable names |
| localization | 0.15 | Correct function/line identified |
| fix_quality | 0.35 | Tests pass on the proposed fix |
| semantic_similarity | 0.10 | Similarity to the canonical fix |
| efficiency_potential | 0.10 | Potential-based shaping (Ibrahim et al. 2024) |
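
The total reward is the weighted sum of these components. A sketch, assuming each component score is already normalized to [0, 1]:

```python
WEIGHTS = {
    "format_compliance":    0.10,
    "hypothesis_quality":   0.20,
    "localization":         0.15,
    "fix_quality":          0.35,
    "semantic_similarity":  0.10,
    "efficiency_potential": 0.10,
}

def total_reward(scores: dict) -> float:
    """Weighted sum over the table above; missing components score 0."""
    return sum(w * scores.get(name, 0.0) for name, w in WEIGHTS.items())
```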

### Required Output Format
```
OBSERVATION: [specific observations with line numbers]
HYPOTHESIS: [2+ sentences explaining root cause with variable names]
CONFIDENCE: [low | medium | high]
ACTION: [inspect_lines | run_tests | propose_fix | request_context | give_up]
DETAIL: [complete fixed function code if propose_fix, else details]
```
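
The check behind format_compliance can be sketched as below; the real parser is `parse_agent_output()` in `server/models.py`, so treat this as an illustration only:

```python
import re

REQUIRED_FIELDS = ["OBSERVATION", "HYPOTHESIS", "CONFIDENCE", "ACTION", "DETAIL"]

def has_all_fields(text: str) -> bool:
    """True if every required field header starts a line in the model output."""
    return all(re.search(rf"^{field}:", text, re.MULTILINE)
               for field in REQUIRED_FIELDS)
```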

---

## Running Training

### On Kaggle (T4 — free):
```python
# Cell 1 — install
!pip install -q wandb==0.18.7 datasets==3.0.2 transformers==4.46.3 \
    accelerate==1.0.1 trl==0.14.0 bitsandbytes==0.45.3 peft==0.13.2

# Cell 2 — clone + secrets
from kaggle_secrets import UserSecretsClient
import os
secrets = UserSecretsClient()
os.environ["WANDB_API_KEY"] = secrets.get_secret("WANDB_API_KEY")
os.environ["HF_TOKEN"] = secrets.get_secret("HF_TOKEN")
!git clone https://github.com/shasshaank/meta_hackthon.git /kaggle/working/repo
%cd /kaggle/working/repo

# Cell 3 — train (streams output live)
import subprocess, sys
proc = subprocess.Popen(
    [sys.executable, "training/train_grpo.py"],
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
    text=True, bufsize=1, cwd="/kaggle/working/repo",
)
for line in proc.stdout:
    print(line, end="", flush=True)
proc.wait()

# Cell 4 — save outputs after training
import shutil
shutil.copytree("/kaggle/working/repo/checkpoints", "/kaggle/working/checkpoints", dirs_exist_ok=True)
```

**Kaggle secrets needed:** `WANDB_API_KEY`, `HF_TOKEN`
**Kaggle GPU:** T4 x1 (NOT P100 — incompatible with modern PyTorch)
**Expected time:** ~8–10 hours for 500 steps (default max_steps=500)

### On RunPod (A100 — ~$1.09/hr):
```bash
git clone https://github.com/shasshaank/meta_hackthon.git && cd meta_hackthon
pip install -q wandb==0.18.7 datasets==3.0.2 transformers==4.46.3 \
    accelerate==1.0.1 trl==0.14.0 bitsandbytes==0.45.3 peft==0.13.2
WANDB_API_KEY=xxx HF_TOKEN=xxx python training/train_grpo.py
```
**Expected time:** ~3–4 hours for 1000 steps on A100 40GB

### Resume from checkpoint:
```bash
python training/train_grpo.py --resume ./checkpoints/checkpoint-400
```

### Local sanity check (no GPU):
```bash
python training/train_grpo.py --test-local
```

---

## HF Space Setup (training monitor)

The training Space (`AgentDebugger-training-v2`) is a Gradio app that:
1. On startup, spawns `training/train_grpo.py` in a background thread
2. Shows a live training log in the UI, auto-refreshing every 30s (see the sketch below)
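
A minimal sketch of that pattern, assuming the trainer's output is redirected to a `train.log` file (hypothetical name; the real `app.py` may stream logs differently):

```python
import subprocess, sys, threading
import gradio as gr

LOG_PATH = "train.log"  # assumed log file, not necessarily what app.py uses

def start_training():
    """Spawn the trainer once, redirecting all output to LOG_PATH."""
    log = open(LOG_PATH, "w")
    subprocess.Popen([sys.executable, "training/train_grpo.py"],
                     stdout=log, stderr=subprocess.STDOUT)

def read_log():
    """Tail the log so the Textbox stays light."""
    try:
        with open(LOG_PATH) as f:
            return f.read()[-20_000:]
    except FileNotFoundError:
        return "waiting for the trainer to start..."

threading.Thread(target=start_training, daemon=True).start()

with gr.Blocks() as demo:
    box = gr.Textbox(label="Training log", lines=30)
    timer = gr.Timer(value=30)        # replaces the deprecated `every=` kwarg
    timer.tick(read_log, outputs=box)

demo.launch()
```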

**Required Space secrets:**
- `WANDB_API_KEY`
- `HF_TOKEN`

**Push to Space:**
```bash
git remote set-url space https://shashaank0707:YOUR_HF_TOKEN@huggingface.co/spaces/shashaank0707/AgentDebugger-training-v2
git push space main --force
```

---

## Known Issues Fixed (do not revert)

| Issue | Fix |
|---|---|
| `ImportError: cannot import name 'GRPOTrainer'` | `trl==0.12.2` → `trl==0.14.0` |
| `TypeError: GRPOConfig got unexpected keyword 'max_new_tokens'` | renamed to `max_completion_length` |
| `pydantic` conflict with `gradio[mcp]` | `pydantic==2.10.6` → `2.12.5` |
| P100 not supported by PyTorch 2.x | switched to T4 on Kaggle |
| `bitsandbytes` CUDA binary not found | `bitsandbytes==0.43.3` → `0.45.3` on Kaggle |
| `unsloth` CUDA driver crash on HF A100 | replaced with `bitsandbytes` + `peft` |
| `gradio` `every=` deprecation | replaced with `gr.Timer(value=30)` |

---

## W&B Dashboard
https://wandb.ai/shashaankjain07-keshav-memorial-college-of-law/AgentDebuggerEnv

Training runs appear here automatically when `WANDB_API_KEY` is set.

---

## What's Left To Do

- [ ] **Finish training** — 500–1000 steps; the model pushes to the HF Hub automatically on completion
- [ ] **Verify trained model** — run `inference.py` against the trained model checkpoint
- [ ] **Update HF Space README** — change the curriculum description to match the actual step boundaries (150/350)
- [ ] **Submission** — ensure the inference/env Space (`AgentDebugger-env`) is live and healthy for judging
get_logs.py DELETED
@@ -1,11 +0,0 @@
import urllib.request
import json
import sys

url = "https://huggingface.co/api/spaces/shashaank0707/AgentDebugger-training-v3"
req = urllib.request.Request(url)
with urllib.request.urlopen(req) as response:
    data = json.loads(response.read().decode())

# Not sure where the build logs are in the API, but I can check the state
print(data.get('runtime', {}).get('stage'))
req_test.txt DELETED
@@ -1,9 +0,0 @@
gradio[oauth,mcp]==6.13.0
pydantic==2.12.5
wandb==0.18.7
datasets==3.0.2
transformers==4.46.3
accelerate==1.0.1
trl==0.14.0
mergekit
peft==0.13.2
req_test2.txt DELETED
@@ -1,9 +0,0 @@
gradio[oauth,mcp]==6.13.0
pydantic
wandb
datasets
transformers
accelerate
trl==0.14.0
mergekit
peft
req_test3.txt DELETED
@@ -1,10 +0,0 @@
click>=8.1.0
gradio[oauth,mcp]==6.13.0
pydantic
wandb
datasets
transformers
accelerate
trl==0.14.0
mergekit
peft
requirements.txt CHANGED
@@ -1,4 +1,3 @@
-# torch, CUDA 12.1, and cuDNN 8 are pre-installed in the base image:
-# pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime
-# Do NOT add torch here — pip would resolve to the CPU wheel from default PyPI
-# and overwrite the CUDA-enabled torch from the base image.
+# torch must be installed at build time (CUDA wheel is ~2GB, too slow at runtime)
+# Everything else is installed at runtime in training/train_grpo.py
+torch
 
scratch/normalize_inputs.py DELETED
@@ -1,52 +0,0 @@
import os
import sys
import pprint

sys.path.append(os.path.abspath('.'))
from data.generate_bugs import TIER1_BUGS, TIER2_BUGS, TIER3_BUGS

def normalize_test_cases(bugs):
    for b in bugs:
        for t in b.get("test_cases", []):
            inp = t["input"]
            if isinstance(inp, (list, tuple)):
                t["input"] = list(inp)
            else:
                t["input"] = [inp]

normalize_test_cases(TIER1_BUGS)
normalize_test_cases(TIER2_BUGS)
normalize_test_cases(TIER3_BUGS)

def dump_var(f, name, val):
    f.write(f'{name} = ')
    f.write(pprint.pformat(val, sort_dicts=False, width=120))
    f.write('\n\n')

with open("data/generate_bugs.py", "w", encoding="utf-8") as f:
    f.write('"""\nAgentDebuggerEnv - Bug Dataset Generator\n\n')
    f.write('Generates three tiers of buggy Python functions for curriculum learning:\n')
    f.write('  Tier 1 (easy): Off-by-one errors, wrong operators, simple logic inversions\n')
    f.write('  Tier 2 (medium): Incorrect algorithm logic, wrong variable references, subtle type errors\n')
    f.write('  Tier 3 (hard): Multi-bug interactions, concurrency, edge-case-only failures\n\n')
    f.write('Usage:\n    python data/generate_bugs.py\n\n')
    f.write('Outputs:\n    data/bugs_tier1.jsonl (~40 bugs)\n    data/bugs_tier2.jsonl (~30 bugs)\n    data/bugs_tier3.jsonl (~20 bugs)\n"""\n\n')
    f.write('import json\nimport os\n\n')

    dump_var(f, 'TIER1_BUGS', TIER1_BUGS)
    dump_var(f, 'TIER2_BUGS', TIER2_BUGS)
    dump_var(f, 'TIER3_BUGS', TIER3_BUGS)

    f.write('def write_jsonl(bugs: list, path: str):\n')
    f.write('    with open(path, "w") as f:\n')
    f.write('        for bug in bugs:\n')
    f.write('            f.write(json.dumps(bug) + "\\n")\n\n')
    f.write('if __name__ == "__main__":\n')
    f.write('    os.makedirs("data", exist_ok=True)\n')
    f.write('    write_jsonl(TIER1_BUGS, "data/bugs_tier1.jsonl")\n')
    f.write('    write_jsonl(TIER2_BUGS, "data/bugs_tier2.jsonl")\n')
    f.write('    write_jsonl(TIER3_BUGS, "data/bugs_tier3.jsonl")\n')
    f.write('    print(f"Tier 1: {len(TIER1_BUGS)}, Tier 2: {len(TIER2_BUGS)}, Tier 3: {len(TIER3_BUGS)}")\n')
    f.write('    print("\\nDone. Run training/train_grpo.py to start training.")\n')

print("Normalization applied successfully.")
test_pip.sh DELETED
@@ -1,6 +0,0 @@
#!/bin/bash
python3 -m venv venv_test
source venv_test/bin/activate
pip install pip -U
pip install datasets "huggingface-hub>=0.30" "hf-transfer>=0.1.4" "protobuf<4" "click<8.1"
pip install -r requirements.txt gradio[oauth,mcp]==6.13.0 "uvicorn>=0.14.0" "websockets>=10.4" spaces
test_pip2.sh DELETED
@@ -1,6 +0,0 @@
#!/bin/bash
python3 -m venv venv_test2
source venv_test2/bin/activate
pip install pip -U
pip install datasets "huggingface-hub>=0.30" "hf-transfer>=0.1.4" "protobuf<4" "click<8.1"
pip install -r req_test.txt gradio[oauth,mcp]==6.13.0 "uvicorn>=0.14.0" "websockets>=10.4" spaces
test_pip3.sh DELETED
@@ -1,6 +0,0 @@
#!/bin/bash
python3 -m venv venv_test3
source venv_test3/bin/activate
pip install pip -U
pip install datasets "huggingface-hub>=0.30" "hf-transfer>=0.1.4" "protobuf<4" "click<8.1"
pip install -r req_test2.txt spaces
test_pip4.sh DELETED
@@ -1,6 +0,0 @@
#!/bin/bash
python3 -m venv venv_test4
source venv_test4/bin/activate
pip install pip -U
pip install datasets "huggingface-hub>=0.30" "hf-transfer>=0.1.4" "protobuf<4" "click<8.1"
pip install -r req_test3.txt spaces
uv.lock CHANGED
The diff for this file is too large to render. See raw diff