Building Real-Time Slack Alerts for Job Applications with AWS Lambda, SQS, and Bedrock

🛠️ This is a technical deep-dive for engineers and DevOps practitioners. If you're a non-technical reader looking for a high-level overview of how this feature works, check out How We Built Real-Time Slack Alerts for Job Applications (Powered by AI) instead.
At atWare Vietnam, hiring is a core part of how we grow. But as job applications piled up, the hiring team was losing time just keeping track of what had arrived. We built a real-time notification pipeline on AWS — so the moment a candidate submits their application, a Slack alert lands in the hiring channel within 30 seconds.
The Problem
Every time we open a new job position, applications start arriving as PDF files. There was no real-time visibility — the hiring team had to manually check a shared inbox to know if anything had come in. We wanted an event-driven system that would notify the team instantly and include a brief AI-generated summary, so they could act fast without opening attachments.
The Solution: An Event-Driven AI Pipeline
We designed a fully serverless, event-driven system using:
- Amazon S3 — storage for incoming applications and generated summaries
- Amazon SQS — decouples the S3 event from the Lambda processor
- AWS Lambda — runs the processing logic
- Amazon Bedrock — AWS's fully managed AI service, used to generate a brief application summary without managing any model infrastructure
- AWS SSM Parameter Store — stores the AI prompt and model configuration securely
Here's the overall architecture:
```
Candidate uploads application (PDF)
        ↓
S3 Bucket (resumes/{job_slug}/(unknown).pdf)
        ↓ S3 Event Notification
SQS Queue
        ↓ Lambda Trigger
Lambda Function
        ↓
Amazon Bedrock
        ↓
S3 Bucket → report .md + metadata .json
        ↓
Slack Notification (cv-slack-notification module)
```
Step 1: S3 Triggers SQS on New Application Upload
We configure an S3 bucket notification that fires whenever a new `.pdf` file is uploaded under the `resumes/` prefix. The notification is sent to an SQS queue rather than directly triggering Lambda — this gives us decoupling, retry resilience, and a dead-letter queue (DLQ) for failed messages.
```hcl
resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = var.s3_bucket_name

  queue {
    queue_arn     = aws_sqs_queue.cv_notification_queue.arn
    events        = ["s3:ObjectCreated:*"]
    filter_prefix = "resumes/"
    filter_suffix = ".pdf"
  }
}
```
The SQS queue is configured with:
- `visibility_timeout = 240s` (2× Lambda timeout) to prevent duplicate processing
- A Dead Letter Queue (DLQ) — after 3 failed attempts, the message moves to the DLQ for investigation
```hcl
resource "aws_sqs_queue" "cv_notification_queue" {
  name                       = "atware-sqs-cv-notification"
  visibility_timeout_seconds = 240 # 2× Lambda timeout → prevents duplicate processing

  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.cv_notification_dlq.arn
    maxReceiveCount     = 3 # move to DLQ after 3 failures
  })
}
```
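The DLQ referenced above is a plain queue with a longer retention window so failed messages can be inspected at leisure; the name and retention value below are assumptions, not the module's actual settings:

```hcl
resource "aws_sqs_queue" "cv_notification_dlq" {
  name                      = "atware-sqs-cv-notification-dlq" # assumed name
  message_retention_seconds = 1209600 # 14 days to investigate failures
}
```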
Step 2: Lambda Reads the CV and Calls Bedrock
The Lambda function (`handler.py`) is triggered by the SQS event. For each message, it:
- Parses the S3 bucket name and file key from the SQS payload
- Downloads the PDF bytes from S3
- Extracts the `job_slug` from the file path (e.g. `resumes/backend-engineer/john.pdf` → `job_slug = "backend-engineer"`)
- Calls `get_cv_summary()` — which sends the PDF to Claude via Amazon Bedrock
```python
# file_key = "resumes/backend-engineer/john-doe.pdf"
bucket_name = record["s3"]["bucket"]["name"]
file_key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

pdf_bytes = s3_client.get_object(Bucket=bucket_name, Key=file_key)["Body"].read()
job_slug = file_key.split("/")[-2]  # → "backend-engineer"

cv_summary = get_cv_summary(pdf_bytes, region, account_id, job_slug)
```
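One detail worth noting: with an SQS trigger, the S3 notification arrives JSON-encoded inside the SQS message body, so the handler first unwraps it before reaching the `record["s3"]` fields used above. A minimal sketch (the helper name is an assumption; the event shape is standard):

```python
import json
import urllib.parse

def parse_s3_records(event):
    """Yield (bucket, key) pairs from an SQS-triggered Lambda event.

    Each SQS record's body is a JSON-encoded S3 event notification,
    which itself carries a list of object records. Keys are URL-encoded
    by S3, so they are decoded with unquote_plus.
    """
    for sqs_record in event["Records"]:
        s3_event = json.loads(sqs_record["body"])
        for record in s3_event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            yield bucket, key
```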
After analysis, the Lambda uploads two files back to S3:
- `cv-notification/reports/{job_slug}/(unknown).md` — a Markdown summary of the application
- `cv-notification/metadata/{job_slug}/(unknown).json` — structured metadata with the candidate's name and summary
The report file is immediately tagged with `is_notified=false`, which signals our Slack notification module to send an alert to the hiring channel.
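With boto3, the upload and the tag can be combined in a single `put_object` call, since `Tagging` accepts a URL-encoded string. A hypothetical helper that builds the request (key layout mirrors the prefixes above; the function itself is not part of the module):

```python
def build_report_put(bucket: str, job_slug: str, filename: str, markdown: str) -> dict:
    """Build put_object kwargs for the report file (hypothetical helper).

    Passing Tagging here means the object is created already tagged,
    so no separate put_object_tagging call is needed.
    """
    return {
        "Bucket": bucket,
        "Key": f"cv-notification/reports/{job_slug}/{filename}.md",
        "Body": markdown.encode("utf-8"),
        "ContentType": "text/markdown",
        "Tagging": "is_notified=false",  # URL-encoded key=value tag string
    }

# Later: s3_client.put_object(**build_report_put(bucket_name, job_slug, stem, report_md))
```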
Step 3: Amazon Bedrock Generates the Application Summary
The heart of the system is `bedrock.py`. Instead of hosting or managing our own model, we use Amazon Bedrock — AWS's fully managed service that gives access to foundation models (including Claude from Anthropic) via a simple API call. No infrastructure to provision, no model weights to serve, no GPU fleet to manage. You call the API, you get a response.
We use the Bedrock Converse API with tool use enabled — meaning the model can actively fetch additional information during the analysis (more on that in Step 4).
The prompt and model ID are stored in SSM Parameter Store and injected at runtime, making them easy to swap without redeploying the Lambda.
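At cold start, the Lambda can pull both values with two `get_parameter` calls; the parameter names below are assumptions, not the module's actual paths:

```python
def load_bedrock_config(ssm_client, prefix: str = "/cv-notification"):
    """Fetch the prompt and model ID from SSM Parameter Store.

    Swapping either value is then an SSM update, not a redeploy.
    The ssm_client is injected (e.g. boto3.client("ssm")) to keep
    this sketch testable.
    """
    prompt = ssm_client.get_parameter(
        Name=f"{prefix}/prompt", WithDecryption=True
    )["Parameter"]["Value"]
    model_id = ssm_client.get_parameter(
        Name=f"{prefix}/model-id"
    )["Parameter"]["Value"]
    return prompt, model_id

# Usage: prompt, model_id = load_bedrock_config(boto3.client("ssm"))
```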
```python
# model_id = "arn:aws:bedrock:ap-northeast-1:123456789012:inference-profile/anthropic.claude-3-5-sonnet-20241022-v2:0"
messages = [{
    "role": "user",
    "content": [
        {"document": {"name": "cv", "format": "pdf", "source": {"bytes": pdf_bytes}}},
        {"text": prompt}
    ]
}]
```
We run a conversation loop (up to 8 rounds) to allow Claude to call tools and receive results before producing its final assessment:
```python
for _ in range(8):
    response = bedrock_client.converse(modelId=model_id, messages=messages, toolConfig=TOOL_CONFIG)
    output_msg = response["output"]["message"]
    messages.append(output_msg)

    if response["stopReason"] == "tool_use":
        tool_results = execute_tools(output_msg["content"])  # WebFetch, GitHubFetch, etc.
        messages.append({"role": "user", "content": tool_results})
        continue

    return _validate_cv_summary(output_msg, job_slug)  # stopReason == "end_turn"
```
Step 4: Tool Use — Bedrock Reaches Beyond the PDF
One of Bedrock's most useful features is tool use (also called function calling). You define a set of tools the model can invoke, and Bedrock handles the back-and-forth conversation loop until it has everything it needs to generate the final response.
We provide the model with 5 tools to enrich the summary beyond what's written in the PDF:
- WebFetch — Fetch any public URL (portfolio, company page, etc.)
- GitHubUserFetch — Get GitHub profile metadata via the GitHub API
- GitHubReposFetch — List a candidate's public repositories
- GitHubRepoFetch — Get details about a specific repository
- SmartFetch — Auto-detect if a URL is a GitHub profile/repo or a plain webpage and call the right tool
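Each tool is declared to the Converse API as a `toolSpec` with a JSON schema describing its input. A sketch of what the `WebFetch` entry in `TOOL_CONFIG` might look like (description and schema details are illustrative, not the module's actual definitions):

```python
TOOL_CONFIG = {
    "tools": [
        {
            "toolSpec": {
                "name": "WebFetch",
                "description": "Fetch a public URL and return its text content.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "url": {"type": "string", "description": "URL to fetch"}
                        },
                        "required": ["url"],
                    }
                },
            }
        },
        # ... GitHubUserFetch, GitHubReposFetch, GitHubRepoFetch, SmartFetch
    ]
}
```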
For example, if the application includes a GitHub link, the model will call SmartFetch to pull public repo metadata — languages used, recent activity, project descriptions — and incorporate that context into the summary sent to Slack.
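On our side, the tool-execution step walks the model's content blocks, runs each requested tool, and answers with `toolResult` blocks keyed by `toolUseId`. A simplified sketch of that shape (the `tools` registry of plain callables is an assumption; the real `execute_tools()` dispatches internally):

```python
def execute_tools(content_blocks, tools):
    """Run each toolUse block and wrap its output as a toolResult block."""
    results = []
    for block in content_blocks:
        if "toolUse" not in block:
            continue  # skip the model's plain-text blocks
        tool_use = block["toolUse"]
        output = tools[tool_use["name"]](**tool_use["input"])
        results.append({
            "toolResult": {
                "toolUseId": tool_use["toolUseId"],  # pairs result with request
                "content": [{"text": output}],
                "status": "success",
            }
        })
    return results
```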
Step 5: Validating the AI Output
Bedrock is expected to return structured JSON in its final message. We validate it with `_validate_cv_summary()` to ensure correctness before saving:
```python
def _validate_cv_summary(data: dict, job_slug: str) -> dict:
    for field in ["job_slug", "name", "report_markdown"]:
        if not data.get(field, "").strip():
            raise ValueError(f"Missing or empty field: {field}")
    return data
```
This strict validation prevents malformed data from reaching the Slack notification step even if the model returns an unexpected response format.
Infrastructure as Code with Terraform
The entire system is packaged as a reusable Terraform module at infra/modules/cv-notification. Deploying it to a new environment is as simple as:
```hcl
module "cv_notification" {
  source             = "../modules/cv-notification"
  project_name       = var.project_name
  s3_bucket_name     = aws_s3_bucket.bucket.id
  s3_report_prefix   = "cv-notification/reports"
  s3_metadata_prefix = "cv-notification/metadata"
}
```
The Lambda function code is automatically zipped from the `handlers/` directory and deployed with each `terraform apply`.
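That packaging step is typically done with Terraform's `archive_file` data source; a sketch under assumed paths and resource names:

```hcl
data "archive_file" "lambda_zip" {
  type        = "zip"
  source_dir  = "${path.module}/handlers"
  output_path = "${path.module}/build/cv-notification.zip"
}

resource "aws_lambda_function" "cv_notification" {
  function_name    = "atware-lambda-cv-notification" # assumed name
  filename         = data.archive_file.lambda_zip.output_path
  source_code_hash = data.archive_file.lambda_zip.output_base64sha256
  handler          = "handler.lambda_handler" # assumed entrypoint
  runtime          = "python3.12"
  timeout          = 120 # half the 240s SQS visibility timeout
  role             = aws_iam_role.lambda.arn # assumed IAM role resource
}
```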
Results
Since deploying this pipeline, the hiring team is notified the moment an application lands — no more inbox polling. The system:
- Delivers a Slack notification with an AI-generated summary within ~30 seconds of submission
- Fetches external profiles (GitHub, portfolio) to enrich the summary
- Handles failures gracefully via SQS DLQ without losing any application
Conclusion
Building this pipeline was a great exercise in combining event-driven AWS architecture with Amazon Bedrock. The key design decisions — using SQS for resilience, SSM for prompt management, and Bedrock's tool use for web enrichment — make the system both robust and flexible. And because Bedrock is a fully managed service, there's no model infrastructure to operate: you define the prompt, call the API, and focus on your application logic.
If you're dealing with a similar challenge, or just want to explore how Amazon Bedrock's Converse API and tool use work in a real production setting, feel free to reach out at contact@atware.asia.
Happy building! 🚀