Source: ssot-applications-templates-defaults.md (ingested 2026-03-28)
SSOT – Applications, Templates & Defaults
Last major update: 2026-02-28
1. Default application settings
- Target volume per week: 15–20 applications
- Priority boards/tools:
- LinkedIn (primary, Easy Apply)
- Indeed (secondary, Easy Apply)
- Handshake (tertiary, Berkeley alumni network)
- Company portals (selective, manual review only)
- Default resume variant per role type:
  - LLM / AI infra: Resume_MA_2026.pdf
  - Research scientist: Resume_MA_2026.pdf
  - Scientific ML / physics-ML: Resume_MA_2026_FEB.pdf
  - Computational / HPC / quantum: Resume_MA_2026_FEB.pdf
  - Full-stack / infra: Resume_MA_2026.pdf
Rules for "auto-apply" vs "manual review":
- Auto-apply when: Easy Apply available, role is senior/staff/principal level, TC >= $200K, matches AI/ML/HPC/quantum domains, remote or Bay Area
- Manual review when: Requires cover letter; TC $150K–$200K (evaluate equity/scope); national labs or FAANG (tailor application); founding/co-founder roles
- Never apply when: Requires U.S. citizenship or active clearance; pure biotech/pharma (4 rejections); heavy hardware/semiconductor (3 rejections); TC < $150K with no equity; requires traditional whiteboard/leetcode interviews
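For any triage or autofill tooling, the three rules above can be sketched as a small decision function. This is a minimal illustrative sketch, not an existing tool: the `Role` fields, field names, and the `triage` helper are all hypothetical stand-ins for however a real pipeline represents a posting.

```python
from dataclasses import dataclass

@dataclass
class Role:
    easy_apply: bool
    level: str                      # "senior" | "staff" | "principal" | other
    tc: int                         # estimated total comp, USD
    domains: set                    # e.g. {"ai", "ml", "hpc", "quantum"}
    location: str                   # "remote", "bay_area", ...
    needs_clearance: bool = False   # citizenship / active clearance required
    needs_cover_letter: bool = False
    leetcode_interview: bool = False
    has_equity: bool = False

TARGET_DOMAINS = {"ai", "ml", "hpc", "quantum"}

def triage(role: Role) -> str:
    """Classify a posting per the SSOT rules: never / auto-apply / manual review."""
    # "Never apply" disqualifiers come first and override everything else.
    if (role.needs_clearance
            or role.leetcode_interview
            or (role.tc < 150_000 and not role.has_equity)):
        return "never"
    # Auto-apply only when every criterion holds and no cover letter is needed.
    if (role.easy_apply
            and role.level in {"senior", "staff", "principal"}
            and role.tc >= 200_000
            and role.domains & TARGET_DOMAINS
            and role.location in {"remote", "bay_area"}
            and not role.needs_cover_letter):
        return "auto-apply"
    # Everything else (cover letters, mid-band TC, labs/FAANG) gets a human look.
    return "manual review"
```

For example, `triage(Role(True, "senior", 250_000, {"ml"}, "remote"))` lands in the auto-apply bucket, while the same posting with `needs_cover_letter=True` drops to manual review.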
2. Easy-apply defaults
- Desired compensation:
- Bay Area senior/staff AI/ML: $250,000 base minimum; target $300K–$350K TC
- Remote US: $250,000 base minimum; target $280K–$320K TC
- Early-stage startup with equity: $200K+ base acceptable with meaningful equity (1%+ for seed/Series A)
- Earliest start date: Immediately available
- Work authorization text: "Authorized to work in the U.S. on STEM-OPT (3-year extension, approximately 2.5 years remaining). No future sponsorship needed. Have been in the U.S. for 7+ years; well-positioned for EB-1A."
- Willingness to relocate / remote preferences: "Open to remote, hybrid, or on-site in the SF Bay Area. Willing to relocate to Seattle, Boston, NYC, or other major US tech hubs with relocation assistance."
- Preferred locations list (copy-pastable): San Francisco, Berkeley, Palo Alto, San Jose, Mountain View, Menlo Park, Redwood City, Seattle, Boston, New York, Remote (US)
- "How did you hear about this role?" defaults:
- Default: "LinkedIn" or "Indeed" (match the platform)
- Variant for referrals: "Referred by [Name] — [connection context]"
- Variant for events/conferences: "Met your team at [Event] / saw the posting through [Channel]"
Form field quick-reference (copy-paste)
| Field | Value |
|---|---|
| Full Name | Meshal Alawein |
| Email | contact@meshal.ai |
| Phone | +1-415-660-6676 |
| Location | Berkeley, CA |
| LinkedIn | linkedin.com/in/alawein |
| GitHub | github.com/alawein |
| Portfolio | meshal.ai |
| Google Scholar | scholar.google.com/citations?user=IBE6GQAAAAJ |
| Years of Python | 11 |
| Years of PyTorch | 4 |
| Years of C++ | 14 |
| Years of ML/AI | 6 |
| Highest education | PhD, EECS, UC Berkeley (2025) |
| Visa/sponsorship | Authorized, no sponsorship needed |
| Veteran status | Not a protected veteran |
| Disability | No disability |
| Gender | Male |
| Race/Ethnicity | Decline to self-identify |
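For browser autofill scripts or a form-filler extension, the same quick-reference can also be kept as a plain mapping. This is a hypothetical sketch: the key names are made up here and would need to match whatever tool actually consumes them; the values are copied from the table above.

```python
# Form-field defaults mirroring the quick-reference table (illustrative key names).
FORM_DEFAULTS = {
    "full_name": "Meshal Alawein",
    "email": "contact@meshal.ai",
    "phone": "+1-415-660-6676",
    "location": "Berkeley, CA",
    "linkedin": "linkedin.com/in/alawein",
    "github": "github.com/alawein",
    "portfolio": "meshal.ai",
    "google_scholar": "scholar.google.com/citations?user=IBE6GQAAAAJ",
    "years_python": 11,
    "years_pytorch": 4,
    "years_cpp": 14,
    "years_ml_ai": 6,
    "highest_education": "PhD, EECS, UC Berkeley (2025)",
    "visa_sponsorship": "Authorized, no sponsorship needed",
    "veteran_status": "Not a protected veteran",
    "disability": "No disability",
    "gender": "Male",
    "race_ethnicity": "Decline to self-identify",
}
```

Keeping this next to the table means one edit (e.g. a new phone number) only has to be made in two known places.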
3. Common long-form questions
"Why are you interested in this role?"
Generic template: "I'm drawn to this role because it sits at the intersection of research depth and production engineering — exactly where I operate best. My background spans LLM training pipelines (SFT/RLHF), large-scale scientific computing on HPC infrastructure, and full-stack AI platform development. I'm looking for a position where I can apply that breadth to high-impact technical challenges while continuing to grow as a senior engineer."
LLM / AI infra variant: "This role aligns directly with my experience building end-to-end LLM fine-tuning and evaluation systems. At Turing Enterprises, I designed SFT/RLHF pipelines with automated evaluation harnesses for domain-specific reasoning. At Morphism Systems, I architected governed LLM agent infrastructure with structured outputs and quality enforcement. I'm excited to bring that production LLM systems expertise to your team."
Scientific ML / physics-ML variant: "My PhD work at UC Berkeley and Lawrence Berkeley National Lab focused on integrating ML surrogate models with electronic structure simulations, achieving 70% runtime reduction and $160K annual savings across 24,000 CPU-hours. This role's focus on scientific ML is a natural extension of that work, and I'm eager to apply my expertise in physics-informed ML and HPC optimization at production scale."
"Why do you want to work here?"
Generic template: "I've been following [Company]'s work in [specific area], and I'm impressed by the technical ambition and the caliber of the team. I'm looking for a place where my combination of deep research experience and production engineering discipline creates outsized value, and your approach to [specific product/problem] suggests that's exactly the kind of challenge you're solving."
Startup variant: "What excites me about [Company] is the opportunity to shape technical direction at a formative stage. As a founder of Morphism Systems, I've built production AI infrastructure from zero — 52 TypeScript packages, 9 platforms, governed LLM agent systems. I thrive in environments where I can wear multiple hats and make architectural decisions that scale."
Big-tech / lab variant: "I'm drawn to [Company]'s investment in [specific research area] and the resources to pursue it at scale. My research background (16 publications, PhD in quantum materials + ML) combined with production engineering experience (Docker/Kubernetes, CI/CD, MLOps) positions me to bridge research prototypes and deployment-grade systems — which I understand is a key challenge for your team."
"Describe a project you're proud of."
DFT/HPC workflow story: "At UC Berkeley and Lawrence Berkeley National Lab, I engineered a production DFT/HPC workflow managing 2,300 computational jobs and 24,000 CPU-hours. The challenge was that traditional brute-force DFT screening was prohibitively expensive. I built an end-to-end pipeline with monitoring, automated error handling, and regression testing, then integrated ML surrogate models to predict electronic structure properties. This reduced runtime by 70%, saved $160K annually, and accelerated materials discovery by 10x."
SFT/RLHF / LLM pipeline story: "At Turing Enterprises, I designed end-to-end LLM fine-tuning systems for domain-specific reasoning in computational physics. I built synthetic data generation frameworks with automated validation, developed benchmark suites and evaluation harnesses measuring reasoning quality and structured outputs, and optimized distributed PyTorch training workflows. The result was a repeatable evaluation process that reduced manual testing by 80% and improved model reliability across iterations."
Morphism Systems infra story: "As founder of Morphism Systems, I architected a scalable AI development framework from scratch: a 52-package TypeScript monorepo with governed LLM agent infrastructure, structured outputs, automated regression testing, and monitoring dashboards. I built 9 full-stack production platforms and created a unified documentation ingestion pipeline processing 500+ pages. The key insight was treating AI agent outputs like production code — with quality gates, validation layers, and reproducibility guarantees."
"Describe a time you solved a difficult technical problem."
HPC bottleneck + cost reduction story: "During my PhD, our DFT screening pipeline was consuming $200K+/year in compute with unpredictable failures and no cost visibility. I built production-grade monitoring and automated error-handling, then integrated ML surrogate models to bypass expensive calculations where confidence was high. The approach required balancing scientific accuracy with computational efficiency — if the surrogate was wrong, we'd miss valid candidate materials. I implemented regression testing and confidence calibration to maintain reproducibility while achieving 70% runtime reduction and $160K annual savings."
RLHF / evaluation harness debugging story: "While building evaluation infrastructure at Turing Enterprises, I discovered that our domain-specific LLMs were passing generic NLP benchmarks but failing on actual physics reasoning tasks. Standard evaluation metrics didn't capture the nuance of structured scientific outputs. I designed a custom benchmark suite with synthetic data generation and domain-specific validators, which revealed systematic failure modes invisible to generic metrics. This reframed how the team approached model iteration and reduced false-positive evaluation results significantly."
"What are your long-term goals?"
Individual contributor / staff+ track: "I want to grow into a staff+ engineering role where I can set technical direction for AI/ML infrastructure while staying deeply hands-on. I'm most energized when I can define system architecture, mentor engineers, and drive projects from research insight to production impact. Long-term, I aim to be the kind of technical leader who shapes how organizations build and deploy AI systems reliably."
Founder / product-builder framing: "I'm driven by building systems that bridge cutting-edge research and production deployment. Long-term, I want to lead the development of AI infrastructure that makes sophisticated ML accessible and reliable at scale — whether that's within a company's platform team or through products I help create. My experience founding Morphism Systems taught me that the biggest leverage comes from building reusable, governed primitives that accelerate everything downstream."
"Anything else we should know?"
Short "glue text" for most forms: "I'm immediately available, based in Berkeley/SF, and authorized to work in the U.S. on STEM-OPT with no sponsorship needed. Happy to share portfolio work, publications, or discuss any of my projects in more detail."
Optional "research + infra hybrid" variant: "My unusual strength is operating across the full stack from quantum materials theory to production Kubernetes deployments. I have 16 peer-reviewed publications and have also shipped 9 production platforms. I'm looking for a role that values both research depth and engineering rigor rather than siloing them."
4. Cover letter & outreach templates
4.1 Master cover letter skeleton
Dear [Hiring Manager / Team],
I'm writing to express my interest in the [Role] position at [Company]. I'm a
PhD-trained AI Research Engineer with expertise in [2-3 relevant areas from:
LLM training pipelines, scientific ML, HPC optimization, quantum computing,
full-stack AI platforms].
[Company] caught my attention because of [1-2 specific things: recent paper,
product launch, technical blog, mission alignment]. My work at
[Berkeley/Turing/Morphism] directly aligns with your team's challenges in
[specific area from JD].
Two projects that demonstrate my fit:
- [Project 1: pick from DFT/HPC pipeline ($160K savings, 70% speedup),
SFT/RLHF pipelines, or Morphism Systems (52 packages, 9 platforms)]
- [Project 2: different from above, matched to JD requirements]
I'm immediately available, based in the SF Bay Area, and authorized to work
in the U.S. I'd welcome the opportunity to discuss how my background in
[key skill] can contribute to [Company]'s goals.
Best regards,
Meshal Alawein
contact@meshal.ai | meshal.ai | github.com/alawein
4.2 Cold email / DM templates
Recruiter outreach:
Hi [Name], I noticed [Company] is hiring for [Role] — it's a strong match for
my background. I'm a PhD (UC Berkeley, EECS) with expertise in [2 skills from
JD]. I've built production LLM pipelines, optimized HPC workflows saving
$160K/year, and shipped 9 AI platforms as a founder. I'm targeting $250K+ base
and am immediately available. Happy to share my resume if there's interest.
— Meshal
Hiring manager outreach:
Hi [Name], I came across the [Role] opening on your team and wanted to reach
out directly. My background maps closely to what you're building:
- Built end-to-end SFT/RLHF pipelines with automated evaluation harnesses
- Engineered DFT/HPC workflows managing 24,000 CPU-hours with 70% runtime
reduction
- Architected a 52-package AI platform framework (Morphism Systems) with
governed agent infrastructure
I'd love to learn more about [specific challenge/product].
Portfolio: meshal.ai — Meshal
"Reconnect" / referral ask (warm contact):
Hi [Name], hope you're doing well. I recently completed my PhD at Berkeley and
am actively looking for senior AI/ML engineering roles. I saw that [Company] is
hiring for [Role] and thought of our [connection context]. Would you be open to
a quick chat or referral? I'd really appreciate it. — Meshal
"Reconnect" / referral ask (weak-tie / alumni):
Hi [Name], we connected through [Berkeley/KAUST/conference]. I'm now looking
for senior AI/ML roles after finishing my PhD. If you know of any teams hiring
for [LLM infra / scientific ML / HPC], I'd be grateful for a pointer. Happy to
return the favor anytime. — Meshal
4.3 Follow-up email templates
After application:
Hi [Name], I recently applied for the [Role] position at [Company] and wanted
to follow up briefly. I'm particularly excited about [specific aspect of the
role/company]. My background in [relevant skill] and experience [brief
achievement] make me confident I'd contribute meaningfully to your team. I'd
welcome the opportunity to discuss further.
— Meshal Alawein, contact@meshal.ai
After interview:
Hi [Name], thank you for taking the time to speak with me about the [Role]
position. I especially enjoyed our discussion about [specific topic]. It
reinforced my interest in [Company] — particularly [specific problem/project
I'd work on]. I'm excited about the possibility of contributing to
[team/product]. Please don't hesitate to reach out if there's anything else I
can provide. — Meshal
After no response:
Hi [Name], I wanted to follow up on my application for the [Role] position.
Since applying, I've [brief new signal: published a paper, shipped a feature,
completed a project]. I remain very interested in the opportunity and would
love to discuss how my background in [skill] could support your team's work.
— Meshal
5. Screening & recruiter call script
Brief intro pitch: "I'm Meshal Alawein — PhD from UC Berkeley in computational physics and ML systems. I specialize in LLM training infrastructure — SFT, RLHF, evaluation harnesses — and large-scale scientific computing. I recently founded Morphism Systems, building governed AI agent platforms. I'm targeting senior or staff-level roles in AI/ML engineering or research."
Key points to hit:
- Core skills: LLM infra (SFT/RLHF/evaluation), DFT/HPC optimization, full-stack AI platforms
- Target roles: Senior/Staff AI Research Engineer, ML Systems Engineer, Research Scientist
- Timeline: Immediately available, no notice period
- Comp band: "$250K+ base, targeting $300K–$350K total comp. Flexible on structure (equity vs. cash) for the right scope and mission."
3–5 questions you always want to ask:
- "What does the team structure look like, and where does this role sit in terms of technical ownership?"
- "What are the biggest technical challenges the team is facing right now?"
- "How does the team balance research exploration with production delivery?"
- "What does success look like in the first 6 months for this role?"
- "What's the interview process from here?"
Topics to avoid on first call:
- Detailed visa/immigration mechanics (just say "authorized, no sponsorship needed")
- Specific competing offers or company names
- Negative comments about previous employers or roles
- Deep compensation negotiation (state band, defer specifics to later stages)
6. Negotiation notes
Your internal "walk-away" numbers:
- Standard senior/staff IC role (Bay Area): $250K base minimum, $300K+ TC target
- Remote US role: $250K base minimum, $280K+ TC target
- Startup with significant equity: $200K base minimum with 1%+ equity (seed/Series A) or 0.5%+ (Series B+)
- Principal-level / technical co-founder: Flexible on cash ($180K+ acceptable) if equity is 1.5%–3% with clear vesting and authority
Trade-offs you'll accept:
- Title vs comp: Will accept lower title (Senior instead of Staff) if TC is $300K+ and scope matches
- Equity vs cash: Will take $50K–$100K below market base for meaningful equity at early-stage companies with real product-market traction
- Remote vs on-site/hybrid: Will accept on-site/hybrid for the right role; no discount on comp for remote
Phrases/scripts you like:
- For stating expectations: "I'm targeting a base of $250K+, with total compensation in the $300K–$350K range, consistent with senior/staff AI research engineering roles in the Bay Area."
- For countering: "I appreciate the offer. Based on my background and the scope of this role, I was expecting something closer to [target]. Is there flexibility in [base/equity/sign-on] to bridge the gap?"
- For stalling deadlines / waiting on other offers: "I'm very interested and want to give this the consideration it deserves. I have a few other conversations in final stages — could we extend the deadline by [one week / two weeks] so I can make a fully informed decision? I want to commit wholeheartedly."
7. Saved snippets
Bullets for each flagship project
DFT/HPC / quantum materials workflow (short):
- Engineered production DFT/HPC pipeline managing 2,300 jobs and 24,000 CPU-hours; reduced runtime 70% and saved $160K/year via ML surrogate integration
DFT/HPC / quantum materials workflow (long):
- Engineered production-grade DFT/HPC workflows managing 24,000 CPU-hours with monitoring, automated error handling, and regression testing
- Integrated ML surrogate models with electronic structure simulations to accelerate materials screening 10x
- Discovered tunable flat-band formation in strained 2D semiconductors for quantum photonics applications
- Open-source tools (QMatSim, SciComp) adopted by 10+ research groups, 200+ GitHub stars
RLHF / LLM training & evaluation (short):
- Built end-to-end SFT/RLHF pipelines with synthetic data generation, reward modeling, and automated evaluation harnesses; reduced manual testing 80%
RLHF / LLM training & evaluation (long):
- Designed SFT/RLHF pipelines for domain-specific LLMs at Turing Enterprises, including synthetic data generation and automated validation
- Developed benchmark suites measuring reasoning quality, structured output compliance, and regression detection
- Optimized distributed PyTorch training workflows, reducing instability and improving reproducibility
- Established repeatable evaluation process, improving model reliability across iterations
Morphism Systems infra / agents (short):
- Founded and built Morphism Systems: 52-package TypeScript monorepo, 9 production platforms, governed LLM agent infrastructure with evaluation-driven iteration
Morphism Systems infra / agents (long):
- Architected scalable AI development framework with 52 TypeScript packages and 85 automation scripts
- Built deployment-grade LLM agent systems with structured outputs, automated regression testing, and monitoring dashboards
- Created unified documentation ingestion pipeline processing 500+ pages with quality enforcement gates
- Developed 9 full-stack production platforms (SimCore, REPZ, QMLab, etc.) using React/TypeScript/FastAPI/Supabase
1–2 paragraph explanations
Research track (quantum materials + HPC): "My PhD research at UC Berkeley and Lawrence Berkeley National Lab focused on computational materials science and HPC optimization. I engineered production DFT workflows that managed thousands of quantum simulation jobs, integrating ML surrogate models to reduce computational costs by $160K annually while maintaining scientific reproducibility. This work led to discoveries in strained 2D semiconductors with applications in quantum photonics, resulting in 16 peer-reviewed publications including IEEE Transactions and Physical Review Materials. I've also contributed open-source simulation tools adopted by 10+ research groups."
Infra / platform track (LLM, MLOps, full-stack): "As Founder of Morphism Systems, I built governed LLM agent infrastructure from scratch — a 52-package TypeScript monorepo serving 9 production platforms with structured outputs, quality enforcement gates, and automated regression testing. At Turing Enterprises, I designed end-to-end SFT/RLHF pipelines for domain-specific LLMs, including synthetic data generation frameworks and evaluation harnesses. I bring deep production experience across Docker/Kubernetes, AWS, CI/CD, FastAPI, React/Next.js, and distributed PyTorch training."
Short "why I'm a fit for X-type role" modules
LLM infra / evaluation: "I've built LLM training and evaluation systems end-to-end: SFT/RLHF pipelines, synthetic data generation, reward modeling, and automated benchmark suites. At Morphism Systems, I architected governed agent infrastructure with structured outputs and quality gates. I understand both the ML side (PyTorch, distributed training, evaluation metrics) and the systems side (Docker, Kubernetes, CI/CD, monitoring)."
Scientific ML / physics-ML: "My PhD combined computational physics with ML — I integrated ML surrogate models with DFT simulations to reduce screening costs by $160K/year and accelerate discovery 10x. I bring domain expertise in quantum materials, electronic structure methods, and HPC workflows (SLURM, MPI, CUDA), combined with modern ML tooling (PyTorch, scikit-learn, evaluation harnesses)."
Full-stack ML platform: "I've built 9 full-stack production platforms from scratch using React, TypeScript, FastAPI, Supabase, and Vercel. At the same time, I have deep ML infrastructure experience (PyTorch, distributed training, MLflow, Docker/Kubernetes). This combination lets me own entire product stacks from model training through API deployment to user-facing interfaces."
8. Changelog
- 2026-02-28 – Initial fill of all sections from SSOT Profile, memory, and application history. Added form field quick-reference, all long-form answers with role-type variants, cover letter and outreach templates, screening call script, negotiation notes, and project snippets.
=========
# SSOT – Meshal Profile & Defaults
Last major update: YYYY‑MM‑DD
1. Current status
- Program & stage:
- Location:
- Work authorization:
- Time availability (hours/week, start date):
2. Target roles
- Primary titles:
- Secondary/adjacent titles:
- Seniority band:
- Target domains (AI infra, comp physics, etc.):
3. Constraints & preferences
- Locations (preferred, acceptable, excluded):
- Compensation (floor, target, must‑haves like equity/remote, etc.):
- Company types (startup / growth / bigtech, funding stage, size):
- Work style (remote/hybrid, team preferences, on‑call tolerance):
4. Core skills snapshot
- Languages:
- ML / AI stack:
- Infra / tooling:
- Research / theory strengths:
- Other notable skills:
5. Highlight projects
For each project:
- Name:
- One‑line impact:
- 2–3 bullet achievements with metrics:
- Links (GitHub, paper, demo):
6. Links & handles
- LinkedIn:
- GitHub:
- Personal site/portfolio:
- Google Scholar:
- Notion / other trackers:
7. Narrative / pitch
- 2–3 sentence “elevator pitch”:
- 3–5 bullet talking points for intros and interviews:
8. Non‑negotiables
- Role / scope:
- Culture:
- Schedule / travel:
- Anything else:
9. FAQ & tricky Q/A
Maintain concise answers that you like for:
- “Tell me about yourself”
- “Why this role/company?”
- “Walk me through your favorite project”
- “Biggest weakness / failure”
- “Conflicts, setbacks, gap explanations”
- “Why leaving current program/role?” (Add more as they show up in interviews.)
10. Behavioral stories bank
For each story:
- Tag: (Ownership / Leadership / Conflict / Ambiguity / Failure / Impact)
- Situation:
- Task:
- Action:
- Result (with numbers):
11. Technical interview patterns
- Go‑to explanations for your research and systems work.
- Prepared whiteboard/system‑design “modules” you reuse.
- Topics you must review before interviews (with quick links).
12. Changelog
- 2026‑MM‑DD – Note what changed (e.g., updated comp floor, added new projects).
SSOT – Meshal Profile & Defaults
Last major update: 2026-02-27
1. Current status
- Program & stage: Post-PhD (completed Dec 2025), Founder at Morphism Systems
- Location: Berkeley/San Francisco, CA
- Work authorization: F-1 STEM-OPT (3-year extension), no future sponsorship needed
- Time availability: Immediately available; full-time (40+ hours/week)
2. Target roles
- Primary titles: Senior/Staff AI Research Engineer, ML Systems Engineer, Research Scientist (AI/ML)
- Secondary/adjacent titles: Quantum Computing Engineer, Scientific ML Engineer, HPC/Scientific Computing Engineer, AI Infrastructure Engineer
- Seniority band: Senior to Staff level (PhD + 5 years equivalent experience)
- Target domains: LLM systems (SFT/RLHF), AI infrastructure/MLOps, computational physics, quantum computing, scientific ML, HPC workflows
3. Constraints & preferences
- Locations:
- Preferred: SF Bay Area (Berkeley, San Francisco, Palo Alto), Remote
- Acceptable: Seattle, Boston, NYC, major US tech hubs
- Excluded: BioPharma-heavy locations, heavy hardware/semiconductor facilities (based on rejection pattern analysis)
- Compensation:
- Floor: $250K total comp (will consider $200K+ only for exceptional opportunities with equity upside)
- Target: $300K-$350K TC
- Must-haves: Equity component at startups, remote flexibility or hybrid options, relocation assistance if on-site
- Company types:
- Startup: Seed to Series B, AI-first or scientific ML focus
- Growth: AI infrastructure, cloud ML platforms, research-heavy orgs
- BigTech: Google (DeepMind/Quantum AI), Meta (FAIR), Microsoft Research, Amazon (AWS/Braket), IBM Quantum
- National Labs: Berkeley Lab, LANL, Sandia, Oak Ridge
- Avoid: Pure biotech/pharma (4 rejections), heavy hardware/semiconductor (3 rejections)
- Work style:
- Hybrid preferred, open to remote or on-site
- Team preferences: Research-engineering bridge roles, high autonomy in technical decisions
- On-call tolerance: Moderate (acceptable for production ML systems)
4. Core skills snapshot
- Languages:
- Expert: Python (11 years), Bash (10 years), TypeScript/JavaScript (6 years)
- Advanced: C++ (14 years, moderate current use), MATLAB (11 years)
- Intermediate: SQL (10 years), Java (14 years legacy)
- ML / AI stack:
- PyTorch (4 years), TensorFlow, Hugging Face (2 years)
- SFT/RLHF pipelines, LLM fine-tuning, reward modeling, synthetic data generation
- Evaluation harnesses, MLflow, scikit-learn
- Domain-specific LLMs, physics-informed ML
- Infra / tooling:
- Docker, Kubernetes (5 years), AWS (4 years), FastAPI
- CI/CD (GitHub Actions), Git, Linux, SLURM, MPI, CUDA
- Full-stack: React, Node.js, Next.js, Supabase, Vercel
- Research / theory strengths:
- Quantum-classical hybrid methods, computational materials science
- DFT workflows (VASP, SIESTA, Quantum ESPRESSO)
- HPC optimization, high-throughput screening
- Spintronic device modeling
- Other notable skills:
- NumPy, SciPy, pandas, Qiskit, Cirq, PennyLane
- System design, microservices, API design (REST/GraphQL)
- MCP (Model Context Protocol) development (57 configured servers)
5. Highlight projects
Project 1: Production DFT/HPC Pipeline Optimization
- Name: Large-Scale Quantum Materials Screening (UC Berkeley/LBNL)
- One-line impact: Reduced computational costs by $160K/year and runtime by 70% across 2,300 DFT jobs
- Achievements:
- Engineered production-grade DFT/HPC workflows managing 24,000 CPU-hours with monitoring and regression testing
- Integrated ML surrogate models with electronic structure simulations to accelerate materials screening 10×
- Discovered tunable flat-band formation in strained 2D semiconductors for quantum photonics applications
- Links:
Project 2: LLM Training & Evaluation Infrastructure
- Name: SFT/RLHF Pipelines for Domain-Specific LLMs (Turing Enterprises)
- One-line impact: Designed end-to-end LLM fine-tuning and evaluation systems for computational physics reasoning
- Achievements:
- Built synthetic data generation frameworks with automated validation for physics problem-solving
- Developed benchmark suites and automated evaluation harnesses measuring reasoning quality and structured outputs
- Optimized distributed PyTorch training workflows, reducing instability and improving reproducibility
- Links:
- Proprietary work at Turing Enterprises (Aug–Sep 2025)
- Related: Morphism Systems LLM agent infrastructure
Project 3: Morphism Systems AI Platform
- Name: Governed Software Kernels & LLM Agent Infrastructure
- One-line impact: Architected scalable AI development framework with 52 TypeScript packages and 85 automation scripts
- Achievements:
- Built deployment-grade LLM agent systems with structured outputs, automated regression testing, and monitoring dashboards
- Created unified documentation ingestion pipeline processing 500+ pages with quality enforcement gates
- Developed 9 full-stack production platforms (SimCore, REPZ, QMLab, etc.) using React/TypeScript/FastAPI/Supabase
- Links:
- Portfolio: morphism.systems
- GitHub: morphism-framework, morphism-tools
Project 4: Spintronic Device Simulation Framework
- Name: EDA-Compatible Models for Emerging Memory & Logic (KAUST)
- One-line impact: Achieved 1000× speedup over finite-element methods for spintronic device simulation
- Achievements:
- Created circuit-level simulation models translating nanoscale magnetic dynamics into SPICE-compatible formats
- Developed multistate nanomagnetic logic architectures with 200% information density improvement
- Published in IEEE Transactions on Magnetics and presented at IEEE INTERMAG
- Links:
- Publications: IEEE Transactions, KAUST Research Repository
- MS Thesis (2016)
6. Links & handles
- LinkedIn: linkedin.com/in/alawein
- GitHub: github.com/alawein
- Personal site/portfolio: meshal.ai (primary), morphism.systems (platform)
- Google Scholar: scholar.google.com/citations?user=IBE6GQAAAAJ
- Email: contact@meshal.ai
7. Narrative / pitch
- Elevator pitch: PhD-trained AI Research Engineer bridging deep computational physics expertise with production ML systems. Specialized in LLM training pipelines (SFT/RLHF), scientific ML on HPC infrastructure, and quantum materials computation. Founder of Morphism Systems, building governed LLM agent infrastructure with evaluation-driven iteration. Track record of reducing computational costs by $160K annually and delivering 9 full-stack AI platforms from research to production.
- Talking points:
- Rare blend: Deep theoretical grounding (PhD in quantum materials + 16 peer-reviewed papers) combined with production engineering discipline (Docker/Kubernetes, CI/CD, MLOps)
- Cost-conscious optimization: Proven ability to optimize at scale—70% runtime reduction across 24,000 CPU-hours, $160K annual savings
- LLM systems expertise: End-to-end experience with SFT/RLHF pipelines, synthetic data generation, reward modeling, and automated evaluation harnesses
- Founder mindset: Built Morphism Systems from scratch—52-package TypeScript monorepo, 9 production platforms, scalable AI infrastructure
- Bridge builder: Operate effectively from research prototype to deployment-grade system, translating cutting-edge methods into reliable production workflows
8. Non-negotiables
- Role / scope: Must involve technical depth (not pure management); prefer roles bridging research and engineering rather than siloed positions
- Culture: Autonomy in technical decision-making; value-driven over process-heavy bureaucracy; open to feedback and iteration
- Schedule / travel: Up to 25% travel acceptable; no extreme on-call rotations (moderate on-call for production systems OK)
- Compensation floor: $250K TC minimum (exceptions: $200K+ with exceptional equity/mission alignment)
- Interview approach: Portfolio and work-based evaluation strongly preferred; does not participate in traditional whiteboard/leetcode-style interviews (will consider system design and domain-specific technical discussions)
9. FAQ & tricky Q/A
"Tell me about yourself" I'm Meshal Alawein, an AI Research Engineer with a PhD from UC Berkeley in computational physics and ML systems. I specialize in LLM training pipelines—SFT, RLHF, evaluation harnesses—and large-scale scientific computing on HPC infrastructure. At Berkeley Lab, I engineered DFT workflows that cut costs by $160K annually and runtime by 70%. At Turing Enterprises, I built end-to-end LLM fine-tuning systems for domain-specific reasoning. Now, as Founder of Morphism Systems, I'm architecting governed AI agent infrastructure that bridges research prototypes and production deployment. I bring 16 peer-reviewed publications and a track record of shipping production-grade systems.
"Why this role/company?" I'm drawn to [Company] because of its work in [specific area—e.g., AI infrastructure, quantum computing, scientific ML]. My background spans LLM training/evaluation pipelines, scientific ML on HPC systems, and quantum materials computation. I'm looking to apply that breadth to high-impact AI research and engineering challenges where I can contribute both deep technical expertise and production systems experience.
"Walk me through your favorite project" At UC Berkeley and Lawrence Berkeley National Laboratory, I engineered a production DFT/HPC workflow managing 2,300 computational jobs and 24,000 CPU-hours. The challenge was that traditional brute-force DFT screening was prohibitively expensive for materials discovery. I built an end-to-end pipeline with monitoring, automated error handling, and regression testing, then integrated ML surrogate models to predict electronic structure properties. This reduced runtime by 70% and saved $160K annually while maintaining scientific reproducibility. The system accelerated candidate material discovery by 10× and led to discoveries in strained 2D semiconductors with applications in quantum photonics.
"Biggest weakness / failure" Early in my PhD, I built simulation workflows without sufficient attention to reproducibility and testing, which led to inconsistent results that slowed down research progress. I learned to treat research code like production code—implementing CI/CD pipelines, automated testing, and version control discipline. Now, every computational workflow I build includes monitoring, regression testing, and documentation from day one. This shift in mindset has made my research more reliable and my transition to industry ML engineering seamless.
"Conflicts, setbacks, gap explanations" My PhD took longer than typical (2019–2025) because I pivoted from pure quantum materials research to integrating ML methods with DFT workflows, which required learning new tooling (PyTorch, distributed training) while maintaining publication momentum. The extra time allowed me to bridge computational physics and modern ML systems, which positioned me uniquely for AI research engineering roles. I also founded Morphism Systems in parallel, gaining hands-on production engineering experience that most PhD graduates lack.
"Why leaving current program/role?" I completed my PhD in December 2025 and am now transitioning from academic research to industry roles where I can apply my expertise in LLM systems, scientific ML, and HPC infrastructure to high-impact production challenges. Morphism Systems was a side project to gain full-stack and MLOps experience; I'm now seeking senior roles at established AI companies, national labs, or well-funded startups where I can contribute at scale.
10. Behavioral stories bank
Story 1: Ownership – DFT Pipeline Cost Optimization
- Tag: Ownership, Impact, Technical Leadership
- Situation: UC Berkeley/LBNL was running 2,300+ DFT jobs consuming 24,000 CPU-hours with unpredictable failures and no cost visibility
- Task: Reduce computational costs and runtime while maintaining scientific accuracy
- Action:
- Built production-grade monitoring and automated error-handling system
- Integrated ML surrogate models to predict electronic structure properties, bypassing expensive DFT calculations where possible
- Implemented regression testing to ensure scientific reproducibility
- Result: 70% runtime reduction, $160K annual cost savings, 10× acceleration in materials discovery throughput
Story 2: Leadership – LLM Evaluation Infrastructure
- Tag: Leadership, Ambiguity, Systems Thinking
- Situation: Turing Enterprises needed a reliable way to evaluate domain-specific LLM agents for computational physics reasoning, but no standard benchmarks existed
- Task: Design and deploy an automated evaluation framework from scratch
- Action:
- Built benchmark suites with synthetic data generation and automated validation
- Developed evaluation harnesses measuring reasoning quality, structured output compliance, and regression detection
- Collaborated with cross-functional teams to define success metrics and integrate feedback loops
- Result: Established repeatable evaluation process, reduced manual testing time by 80%, improved model reliability across iterations
Story 3: Conflict – Balancing Research Quality vs. Speed
- Tag: Conflict, Communication, Trade-offs
- Situation: During PhD research, my advisor pushed for faster publication output while I wanted to ensure computational reproducibility and robustness
- Task: Find a middle ground that satisfied research timeline without compromising quality
- Action:
- Proposed a phased approach: publish initial findings quickly, then follow up with deeper computational studies
- Invested in automation (CI/CD for research code, automated testing) to speed up iteration without sacrificing rigor
- Communicated trade-offs clearly, showing how upfront investment in infrastructure would accelerate future work
- Result: Published 16 peer-reviewed papers over 6 years, built reusable computational tools (QMatSim, SciComp) adopted by 10+ research groups
Story 4: Failure – Over-Engineering Early Systems
- Tag: Failure, Learning, Iteration
- Situation: When building Morphism Systems initially, I over-engineered the framework with excessive abstractions and category-theory-inspired architecture that slowed development
- Task: Simplify the system while retaining scalability and maintainability
- Action:
- Solicited feedback from potential users and identified pain points
- Refactored to focus on pragmatic, reusable primitives ("arrange once, gift forever") instead of theoretical elegance
- Adopted incremental delivery model: ship minimal viable features, iterate based on real usage
- Result: Accelerated development cycle by 50%, shipped 9 production platforms in 6 months, maintained consistency across ecosystem
Story 5: Ambiguity – Morphism Systems Product Direction
- Tag: Ambiguity, Ownership, Strategic Thinking
- Situation: As founder of Morphism Systems, I faced uncertainty about whether to focus on AI research infrastructure vs. SaaS products vs. open-source tooling
- Task: Define a strategic direction that balanced impact, revenue potential, and personal interest
- Action:
- Conducted market research on AI infrastructure needs (spoke with 20+ engineers/researchers)
- Built prototypes in all three directions, evaluated traction and feedback
- Decided to focus on governed LLM agent infrastructure with open-source tooling as a secondary pillar
- Result: Clear product roadmap, increased GitHub stars from 50 to 200+, attracted early design partnership interest
11. Technical interview patterns
Go-to explanations for research and systems work:
1. DFT/HPC Workflow Optimization: "I built a production pipeline managing 2,300 DFT jobs. The key was treating research code like production code: monitoring, automated error recovery, regression testing. I integrated ML surrogate models to bypass expensive calculations where confidence was high, reducing runtime 70% and saving $160K/year."
2. SFT/RLHF Pipelines: "I designed end-to-end LLM fine-tuning systems: dataset curation, synthetic data generation, reward modeling, automated evaluation. The critical insight was that domain-specific LLMs need domain-specific benchmarks—generic NLP metrics don't capture physics reasoning quality."
3. Morphism Systems Architecture: "Morphism is a 52-package TypeScript monorepo with 'arrange once, gift forever' primitives. I built reusable kernels for LLM agents (structured outputs, validation gates, monitoring) so each new application inherits consistency and quality enforcement automatically."
Prepared whiteboard/system-design "modules":
Module 1: Distributed Training Pipeline
- Data ingestion → preprocessing → distributed training (PyTorch DDP/FSDP) → checkpoint management → evaluation → model registry
- Focus on: fault tolerance, resource allocation, monitoring, experiment tracking (MLflow)
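The fault-tolerance point in Module 1 can be rehearsed with a minimal orchestration skeleton. The stage function here is a stub (a real version would launch PyTorch DDP/FSDP workers and log to MLflow); only the checkpoint-resume logic, which is what interviewers usually probe, is shown.

```python
import json
import os
import tempfile

def train_epoch(epoch: int, state: dict) -> dict:
    """Stub for one distributed training epoch (real code: PyTorch DDP/FSDP)."""
    state["epoch"] = epoch
    state["loss"] = 1.0 / (epoch + 1)  # fake improving loss for the sketch
    return state

def save_checkpoint(path: str, state: dict) -> None:
    with open(path, "w") as f:
        json.dump(state, f)

def load_checkpoint(path: str) -> dict:
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"epoch": -1, "loss": float("inf")}  # fresh run: no checkpoint yet

def run_pipeline(ckpt_path: str, total_epochs: int) -> dict:
    """Resume from the last checkpoint so a crashed job re-runs only missing epochs."""
    state = load_checkpoint(ckpt_path)
    for epoch in range(state["epoch"] + 1, total_epochs):
        state = train_epoch(epoch, state)
        save_checkpoint(ckpt_path, state)  # checkpoint every epoch for fault tolerance
    return state

ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.json")
final = run_pipeline(ckpt, total_epochs=3)
print(final["epoch"])  # 2
```

Calling `run_pipeline(ckpt, total_epochs=5)` again picks up at epoch 3, which is the resume behavior to emphasize when walking through the design.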
Module 2: LLM Agent Infrastructure
- Agent → structured output schema → validation layer → action execution → monitoring/logging → rollback on constraint violation
- Focus on: reliability, governance, reproducibility
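The validate → execute → rollback flow in Module 2 can be sketched as follows. The schema (`REQUIRED_KEYS`) and the budget constraint are hypothetical examples chosen for illustration, not the actual Morphism governance rules.

```python
REQUIRED_KEYS = {"action", "target"}  # hypothetical structured-output schema

def validate(output: dict) -> bool:
    """Validation gate: reject any agent output missing required fields."""
    return REQUIRED_KEYS <= output.keys()

class GovernedExecutor:
    """Applies agent actions to a state dict, rolling back on constraint violations."""

    def __init__(self, state: dict):
        self.state = state
        self.log: list[str] = []  # monitoring/audit trail

    def execute(self, output: dict) -> bool:
        if not validate(output):
            self.log.append("rejected: schema violation")
            return False
        snapshot = dict(self.state)  # cheap rollback point
        self.state[output["target"]] = output.get("value")
        if self._violates_constraints():
            self.state = snapshot  # rollback on constraint violation
            self.log.append("rolled back: constraint violation")
            return False
        self.log.append(f"applied: {output['action']}")
        return True

    def _violates_constraints(self) -> bool:
        # Hypothetical governance rule: the "budget" field must never go negative.
        budget = self.state.get("budget")
        return isinstance(budget, (int, float)) and budget < 0

ex = GovernedExecutor({"budget": 100})
ok = ex.execute({"action": "set", "target": "budget", "value": -5})
print(ok, ex.state["budget"])  # False 100
```

The design choice worth calling out: snapshotting before mutation keeps the executor reproducible, because every rejected action leaves the state exactly as it was.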
Module 3: High-Throughput Scientific Computing
- Job queue (SLURM) → distributed compute (MPI/CUDA) → result aggregation → automated error handling → regression testing
- Focus on: cost optimization, scientific reproducibility, scalability
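The automated error handling in Module 3 reduces to a retry-then-aggregate loop. This toy version simulates node failures with a seeded RNG; a real version would submit via SLURM (`sbatch`) and launch MPI/CUDA work, so `run_job` here is a hypothetical stub.

```python
import random

def run_job(job_id: int) -> dict:
    """Stub compute job (real code: an sbatch/MPI launch); fails ~30% of the time."""
    if random.random() < 0.3:
        raise RuntimeError(f"node failure in job {job_id}")
    return {"job": job_id, "result": job_id * job_id}

def run_with_retries(job_id: int, max_retries: int = 3) -> dict:
    """Automated error handling: retry transient failures before giving up."""
    for attempt in range(max_retries + 1):
        try:
            return run_job(job_id)
        except RuntimeError:
            if attempt == max_retries:
                raise  # persistent failure: surface it instead of looping forever
    raise AssertionError("unreachable")

def aggregate(job_ids: list[int]) -> list[dict]:
    """Result aggregation across the whole queue."""
    return [run_with_retries(j) for j in job_ids]

random.seed(0)  # deterministic failures for the sketch
results = aggregate(list(range(5)))
print(len(results))  # 5
```

Distinguishing transient failures (retry) from persistent ones (fail fast and alert) is the cost-optimization lever to highlight: retries salvage flaky nodes without burning CPU-hours on jobs that can never succeed.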
Topics you must review before interviews:
- Transformer architecture deep dive: Attention Is All You Need (Vaswani et al.)
- RLHF mechanics: InstructGPT paper, PPO algorithm
- System design patterns: Designing Data-Intensive Applications
- Quantum computing basics: Nielsen & Chuang (Chapters 1–2 refresher)
- DFT fundamentals: Martin's Electronic Structure textbook (Chapters 6–7)
12. Changelog
- 2026-02-27 – Initial SSOT created from Space files. Integrated data from Resume_MA_2026.pdf, Job Application Master Reference, COMPREHENSIVE_PROFILE.md, and job search conversation history. Added technical experience years (Python 11+, PyTorch 4+), compensation floor ($250K-$300K TC), behavioral stories bank, and interview preparation modules.
- [Future updates] – Track changes to: compensation targets, new projects, publications, skill additions, target company list, interview Q&A refinements, negotiation outcomes.
Next Steps:
- Review and confirm accuracy of all fields
- Update any outdated information (e.g., phone number, work authorization expiry dates)
- Add any missing behavioral stories as you encounter new interview questions
- Refine technical interview "modules" based on actual interview experiences
- Track all applications and outcomes in the Changelog to identify patterns
This SSOT is now your single source of truth for all job applications, interviews, and career narrative consistency. Update it iteratively as you refine your pitch and gather feedback.