After storing data with files and JSON, it's time to exchange data with external services. The requests library turns HTTP calls into compact Python code, making it a staple for automation scripts. We'll cover installation, GET and POST flows, response validation, persistence, and ideas for scheduling.
Key terms
- HTTP: The protocol behind the web with methods such as GET and POST plus status codes.
- Status code: A numeric indicator of how the server handled the request; 200 is success, 4xx/5xx mean errors.
- Query parameter: Extra data appended to URLs via ?key=value to refine a request.
- Webhook: An HTTP request triggered by an event to deliver automatic notifications.
Core ideas
Study memo
- Time: 60 minutes
- Prereqs: Comfortable with file/JSON IO and extracting logic into functions
- Goal: Implement GET/POST calls, store responses, and handle failures gracefully
HTTP revolves around methods such as GET and POST plus status codes. When combined with requests, those rules become runnable automation.
Code examples
🌐 HTTP recap
- GET: Fetch data from the server.
- POST: Send data or create a resource on the server.
- Status codes: 200 signals success; 4xx or 5xx signal client/server errors.
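The status-code ranges above can be captured in a tiny helper; this is a sketch (the name describe_status is invented here), not part of requests itself:

```python
def describe_status(code: int) -> str:
    """Map an HTTP status code to the coarse class described above."""
    if 200 <= code < 300:
        return "success"
    if 400 <= code < 500:
        return "client error"
    if 500 <= code < 600:
        return "server error"
    return "other"  # e.g. 1xx informational, 3xx redirects

print(describe_status(200))  # success
print(describe_status(404))  # client error
```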
Install requests
In your project folder:
uv add requests
Now you can import requests anywhere in the codebase.
Basic GET request
import requests

response = requests.get("https://api.github.com/repos/python/cpython")
response.raise_for_status()
data = response.json()
print(data["stargazers_count"])
raise_for_status() throws an exception for HTTP errors (400+), so you fail fast with a clear reason.
Query parameters and headers
params = {"q": "python", "per_page": 5}
headers = {"Accept": "application/vnd.github+json"}
response = requests.get("https://api.github.com/search/repositories", params=params, headers=headers)
response.raise_for_status()
items = response.json()["items"]
for repo in items:
    print(repo["full_name"], repo["stargazers_count"])
Passing a params dict spares you from manual URL encoding.
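To see what that encoding step looks like, the standard library's urlencode does roughly the same job requests performs internally (the exact escaping requests applies may differ in edge cases):

```python
from urllib.parse import urlencode

params = {"q": "python language", "per_page": 5}
# Spaces and special characters are escaped for you
query = urlencode(params)
print(query)  # q=python+language&per_page=5

# The URL requests would build looks roughly like this:
url = f"https://api.github.com/search/repositories?{query}"
print(url)
```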
POST requests with JSON bodies
payload = {"text": "Today's deploy succeeded"}
response = requests.post(
    "https://httpbin.org/post",
    json=payload,
)
response.raise_for_status()
print(response.json()["json"])
The json= argument serializes the payload and sets Content-Type: application/json automatically.
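As a sketch of what that shortcut saves you, the equivalent manual version serializes the body and sets the header yourself:

```python
import json

payload = {"text": "Today's deploy succeeded"}

# Roughly what json=payload does for you:
body = json.dumps(payload)
headers = {"Content-Type": "application/json"}
# requests.post(url, data=body, headers=headers) would send the same request
print(body)
```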
Validate responses and catch errors
Network hiccups and auth failures happen. Wrap them in try/except and specify a timeout.
import requests
from requests import RequestException

try:
    response = requests.get("https://status.mathbong.com", timeout=5)
    response.raise_for_status()
except RequestException as exc:
    print("Monitoring failed:", exc)
else:
    print("Status page OK:", response.text[:80])
Persist responses to disk
Combine what you learned about files and JSON with HTTP output.
import json
from pathlib import Path

import requests

reports_dir = Path("reports")
reports_dir.mkdir(exist_ok=True)
response = requests.get("https://api.coindesk.com/v1/bpi/currentprice.json")
response.raise_for_status()
data = response.json()
file_path = reports_dir / "btc-price.json"
file_path.write_text(json.dumps(data, ensure_ascii=False, indent=2), encoding="utf-8")
print("Saved", file_path)
Visualize the flow (request → validate → parse → save) to keep each step straight; a diagram makes it obvious where to expand when requirements change.
Output expectations
Automation shines when you know what “healthy” logs look like.
:::terminal{title="Sample requests automation run", showFinalPrompt="false"}
[
{ "cmd": "uv run python scripts/weather.py", "output": "[INFO] GET https://api.weatherapi.com/v1/current.json?q=Seoul\n[INFO] status=200\n{\"location\":\"Seoul\",\"temp_c\":24.1,\"condition\":\"Cloudy\"}\n[INFO] data/weather-cache.json saved\n[INFO] Slack notification sent", "delay": 500 }
]
:::
- Check that the request URL and status appear first.
- Confirm that you print only the minimal data required.
- Ensure file writes and notifications show up at the end.
Scheduling ideas
"Scheduling" here simply means running the script at fixed intervals. Python does not provide the scheduler; you lean on the OS or hosting platform.
- Register uv run scripts/check_status.py with macOS launchd or cron.
- Use GitHub Actions or a Railway Cron Job to run on a server.
- Send results to Notion or Slack so the team sees updates instantly.
Add logging to capture both the written files and webhook responses.
In practice: Weather summary notifier
import json
from pathlib import Path

import requests

API_URL = "https://api.weatherapi.com/v1/current.json"
WEBHOOK_URL = "https://hooks.slack.com/services/..."
API_KEY = "YOUR_KEY"

cache_file = Path("data/weather-cache.json")
cache_file.parent.mkdir(exist_ok=True)  # ensure data/ exists before writing
params = {"key": API_KEY, "q": "Seoul"}
response = requests.get(API_URL, params=params, timeout=4)
response.raise_for_status()
weather = response.json()
cache_file.write_text(json.dumps(weather, ensure_ascii=False, indent=2), encoding="utf-8")
message = {
    "text": f"Current temp {weather['current']['temp_c']}℃ / feels like {weather['current']['feelslike_c']}℃"
}
notify = requests.post(WEBHOOK_URL, json=message, timeout=3)
notify.raise_for_status()
Fetch the API, cache the response, and alert Slack—the same pattern works for attendance, lunch menus, or school events.
Why it matters
API automation lets you pull school schedules, lunch data, or club attendance from external systems into your own workflow. With raise_for_status() and proper exceptions, you pinpoint failures quickly and keep operations calm.
Practice
- Follow along: Implement the requests.get example and watch response.raise_for_status() fail on purpose.
- Extend: Adapt the weather notifier to another free API and store responses per date in reports/.
- Debug: Set timeout=0.001 to trigger RequestException intentionally and inspect the error for recovery clues.
- Done when: A single script runs GET → save to file → Slack POST with logs that prove each step.
Wrap up
requests is the simplest yet most powerful tool for calling APIs and automating responses. You can now fetch data, store it, and notify people conditionally. Next, you'll organize that data with classes and dataclasses.