# 1102tools

> Free, open source Claude Skills and MCP servers for federal contracting professionals and government contractors.

1102tools provides AI-powered tools that query live federal government APIs and produce acquisition deliverables. All tools are free. All APIs are free. Built for the acquisition workforce: contracting officers, contract specialists (GS-1102 series), program managers, cost analysts, and small business government contractors.

1102tools provides two types of tools: 14 Claude Skills (markdown instruction files) and 8 MCP servers (Model Context Protocol servers for deterministic tool calls). The Claude Skills are interactive instruction files for Claude AI. The MCP servers wrap the same APIs as structured tool calls, installable via pip or uvx and compatible with Claude Desktop, Claude Code, Cursor, Cline, and any MCP-compatible client.

All 8 MCP servers have been independently hardened through 3 to 6 rounds of live audit testing against production APIs, with published testing records documenting every bug found and fixed. Combined across the suite: 719 regression tests and roughly 350 bugs fixed during hardening. The MCPs are the preferred integration path for production use because their tool calls are deterministic (the same inputs produce the same structured outputs), whereas raw API-call generation through Claude Skills can vary across runs. Every MCP repository includes a TESTING.md file with per-round probe counts, priority-rated findings (P0 catastrophic, P1 silent-wrong-data, P2 validation gaps, P3 polish), and the defensive-parsing patterns applied to prevent regressions.

Claude Skills are distributed as downloadable SKILL.md files that users install in Claude AI (Customize > Skills > Create skill > Upload a skill). MCP servers are distributed via PyPI and installable with `pip install` or `uvx`. Both are free and open source. All source code is available on GitHub at https://github.com/1102tools.
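As an illustration of the MCP install path, the snippet below generates a Claude Desktop server entry for one of the published packages. The `mcpServers` key, the `uvx` launch command, and env-var passing are standard MCP client conventions; the specific entry shown (the `sam-gov` server name and `SAM_API_KEY` variable) is a hypothetical sketch, not configuration taken from the 1102tools repositories.

```python
import json

# Hypothetical claude_desktop_config.json fragment that launches the SAM.gov
# MCP server through uvx. The env-var name SAM_API_KEY is an assumption; check
# the sam-gov-mcp README for the variable the server actually reads.
config = {
    "mcpServers": {
        "sam-gov": {
            "command": "uvx",
            "args": ["sam-gov-mcp"],
            "env": {"SAM_API_KEY": "<your-key-here>"},
        }
    }
}

print(json.dumps(config, indent=2))
```

Merging an entry like this into the client's existing config file keeps the API credential in the server process environment rather than the model context, which matches the suite's note that MCP servers suit environments where credentials should not reach the model.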
The skills live in the federal-contracting-skills repository. Each MCP server has its own repository.

These tools were developed by a senior GS-1102-14 contracting officer with an unlimited warrant. The author has worked within the Department of Defense, including time at the Pentagon. The skills were developed on a Claude Max plan using Opus 4.6 with extended thinking, and reflect real-world acquisition workflows, not theoretical templates.

The website is a single-page static site hosted on Cloudflare Pages at https://1102tools.com. It contains the full tool catalog with download links, a getting started guide, example prompts, an architecture diagram showing how skills interconnect, an AI boundaries section defining what these tools will not do, and troubleshooting information.

## API Data Source Skills

- [USASpending API](https://1102tools.com/downloads/usaspending-api.zip): Queries USASpending.gov for federal contract and award data. Look up contracts by PIID, find vendor awards, pull transaction histories, get agency spending breakdowns. Returns award descriptions, obligation amounts, period of performance, vendor details, NAICS/PSC codes. No API key required. Covers all federal agencies.
- [GSA CALC+ Ceiling Rates](https://1102tools.com/downloads/gsa-calc-ceilingrates.zip): Queries awarded not-to-exceed hourly rates from GSA MAS contracts (230K+ records). Search by labor category, education, experience, SIN, vendor, or business size. Returns rates with statistical aggregations (mean, median, percentiles, std dev). No API key required. These are ceiling rates (the maximum a contractor can charge), not prices paid.
- [BLS OEWS Wages](https://1102tools.com/downloads/bls-oews-api.zip): Queries Bureau of Labor Statistics Occupational Employment and Wage Statistics covering approximately 830 occupations across 530+ metro areas.
Returns employment counts, mean/median wages, and full percentile distributions (10th/25th/50th/75th/90th) at national, state, and metro levels. Requires free BLS API key from data.bls.gov/registrationEngine (500 queries/day).
- [GSA Per Diem Rates](https://1102tools.com/downloads/gsa-perdiem-rates.zip): Queries federal travel per diem rates (lodging and M&IE) for all CONUS locations. Look up by city, state, or ZIP. Returns monthly lodging rates with seasonal variations, M&IE breakdowns, and first/last day travel rates at 75%. Requires free api.data.gov key (1,000 req/hr).
- [Federal Register API](https://1102tools.com/downloads/federalregister-api.zip): Queries all Federal Register documents since 1994, including proposed rules, final rules, notices, and executive orders. Track FAR cases, find open comment periods, monitor agency rulemaking. Returns document metadata, abstracts, effective dates, docket IDs, CFR parts affected. No API key required.
- [eCFR Lookup](https://1102tools.com/downloads/ecfr-api.zip): Queries the full current text of the Code of Federal Regulations, updated daily. Read FAR/DFARS clauses, compare versions back to January 2017, browse section structure. Includes RFO deviation awareness. No API key required. Complements the Federal Register skill (what is changing) by showing what regulations currently say.
- [Regulations.gov](https://1102tools.com/downloads/regulationsgov-api.zip): Queries federal rulemaking dockets including proposed rules, final rules, public comments, and docket histories across every agency. The third piece of the regulatory pipeline alongside Federal Register and eCFR. Requires free api.data.gov key (same key as Per Diem).
- [SAM.gov API](https://1102tools.com/downloads/sam-gov-api.zip): Queries SAM.gov REST APIs for entity registration data (UEI/CAGE lookups, registration status, business types, NAICS/PSC codes), exclusion and debarment records for responsibility determinations, contract opportunity searches, and contract award data (FPDS replacement) with PIID lookup, modification histories, vendor award searches, and deleted record access. Requires free SAM.gov API key from SAM.gov. Complements USASpending (aggregated spending analysis) by providing raw FPDS records with full field fidelity.

## Orchestration Skills

- [SOW/PWS Builder](https://1102tools.com/downloads/sow-pws-builder.zip): Structured scope decision workflow that produces a contract-file-ready SOW or PWS .docx. Starts with an Acquisition Strategy Intake (document type, contract type, commercial vs non-commercial), then walks six scope decision blocks before document assembly. Alongside the .docx, the skill produces a separate chat-only staffing handoff table (labor categories, SOC codes, FTE counts) that feeds the IGCE Builder skills without ever appearing in the contract file deliverable, in compliance with FAR 37.102(d). T&M and LH contracts get a Labor Category Ceiling Hours table in Section 5 per FAR 16.601(c)(2). Supports three workflows: full build from scratch, SOO-to-SOW/PWS conversion, and scope reduction to fit a budget. Grounded in FAR 37.602 (performance-based acquisition). No API key required.
- [IGCE Builder Suite](https://1102tools.com/downloads/igce-builder-suite.zip): Three orchestration skills that combine BLS OEWS, GSA CALC+, and GSA Per Diem to build full Independent Government Cost Estimates. FFP (Firm-Fixed-Price) uses a layered wrap rate buildup with discrete cost pools for fringe, overhead, G&A, and profit. LH/T&M (Labor Hour and Time-and-Materials) uses burden multiplier pricing.
Cost-Reimbursement models CPFF, CPAF, and CPIF with fee structure analysis and statutory fee caps. All three include a BLS wage aging factor (Step 2B) that ages wages forward from the BLS data vintage to the contract start date. Produces multi-sheet Excel workbooks with scenario analysis, rate validation, travel detail, methodology narrative, and raw API data. No additional API key required (uses keys from the BLS and Per Diem skills).

## Other Transaction (OT) Skills

- [OT Project Description Builder](https://1102tools.com/downloads/ot-project-description-builder.zip): Milestone-based project descriptions for prototype OT agreements under 10 USC 4021/4022. Replaces the SOW/PWS for OT agreements by structuring work around TRL (Technology Readiness Level) progression phases and go/no-go gates instead of task/subtask CLINs. Walks agreements officers through a 4-question intake (OT type, performer type, TRL entry/exit, consortium or direct) and a 6-block scope decision tree covering prototype objective, TRL phases, technical scope, deliverables, agreement structure, and oversight. Handles NDC (nontraditional defense contractor), small business, traditional (with cost sharing per 10 USC 4022(d)(1)(C)), and consortium-brokered (DIU, AFWERX, NavalX, NSTXL) agreements. Produces a .docx project description with 13-14 sections (agreement overview, technical background, prototype objectives, technical approach by phase, milestone schedule, deliverables, data rights, period of performance, government responsibilities, key personnel, reporting, optional cost-sharing arrangement, production follow-on provisions under 4022(f), and constraints) plus a separate chat-only milestone handoff table for the OT Cost Analysis skill. Supports SOW-to-OT conversion and scope reduction workflows. No API key required.
- [OT Cost Analysis](https://1102tools.com/downloads/ot-cost-analysis.zip): Should-cost estimates and price reasonableness analyses for OT agreements.
Milestone-based pricing citing 10 USC 4021 instead of FAR 15.404. Orchestrates BLS OEWS (labor benchmarking), GSA CALC+ (rate validation), and GSA Per Diem (travel), but structures the estimate around milestones instead of wrap rates or burden multipliers. Handles cost-sharing math per 10 USC 4022(d) for NDC, small business, and traditional performers. Supports consortium management fees (e.g., 4-5% for DIU/AFWERX). Handles fixed-price milestones, cost-type milestones with ceilings, and mixed payment structures. Produces formula-driven Excel workbooks (7 sheets: cost analysis summary, milestone detail, scenario analysis, labor benchmarking, cost-sharing detail with cumulative funding profile, price reasonableness methodology memo, and raw API data). Supports pre-solicitation budget planning (no proposed price) and post-proposal price reasonableness checks. Materials are treated as a first-class cost element for hardware prototypes (40-70% of cost). No additional API key required (uses keys from the BLS and Per Diem skills).

## Architecture and How Skills Connect

The SOW/PWS Builder sits at the top of the FAR-based workflow. It produces scope decisions and a contract-file-ready SOW or PWS .docx. Alongside the document, the builder hands off a separate staffing table (labor categories, SOC codes, FTE counts) as chat output, which feeds into any of the three IGCE Builders (FFP, LH/T&M, or Cost-Reimbursement) based on the contract type. The IGCE Builders pull market wage data from BLS OEWS, validate rates against GSA CALC+ ceiling rates, and add travel costs from GSA Per Diem.

The OT skills form a parallel chain for work outside the FAR. The OT Project Description Builder produces milestone-based scope documents for prototype agreements under 10 USC 4021/4022, structured around TRL progression phases instead of task/subtask CLINs.
It hands off a milestone table (chat output only) to the OT Cost Analysis skill, which builds should-cost estimates with cost-sharing math and price reasonableness memos citing 10 USC 4021 instead of FAR 15.404. The OT Cost Analysis pulls from the same three data sources (BLS OEWS, GSA CALC+, GSA Per Diem) but structures the estimate around milestones instead of wrap rates.

USASpending and SAM.gov are standalone reference skills that 1102s use directly for award and entity lookups. The regulatory skills (Federal Register, eCFR, Regulations.gov) form a separate pipeline for tracking rulemaking end-to-end: Federal Register shows what is changing, eCFR shows what the regulation currently says, and Regulations.gov shows public comments and docket histories. The SAM.gov skill provides entity registration lookups, exclusion and debarment checks, contract opportunity searches, and contract award data (the FPDS replacement) with PIID lookups and modification histories.

## API Keys Required

Three free API keys cover all 14 skills.

- BLS API key: register at https://data.bls.gov/registrationEngine/ for 500 queries per day.
- api.data.gov key: register at https://api.data.gov/signup/ for 1,000 requests per hour (covers GSA Per Diem and Regulations.gov).
- SAM.gov API key: register at https://SAM.gov for entity, exclusion, opportunity, and contract award queries.

All other skills (USASpending, GSA CALC+, Federal Register, eCFR, SOW/PWS Builder, IGCE Builders, OT Project Description Builder, OT Cost Analysis) require no API key.

## Target Audience

Primary users are federal acquisition professionals in the GS-1102 contract specialist series, contracting officers, contracting officer representatives (CORs), contract price/cost analysts, and program managers who write requirements. Secondary users are small business government contractors pricing proposals, agreements officers working OT prototype agreements, and anyone who needs to query federal procurement data.
The tools are agency-agnostic and work for any federal agency, though examples on the website reference specific DoD component agencies including NAVSEA, NAVWAR, AFRL, AFLCMC, and CECOM.

## AI Boundaries: What These Tools Will Not Do

AI accelerates federal contracting when it assembles data and formats human reasoning. It breaks federal contracting when it originates the reasoning itself. The test: if the signer cannot defend every evaluative claim in the final record without pointing back at the tool's output, the tool crossed the line. 1102tools skills are built around three rules.

1. No model reads a proposal. Proposals are adversarial inputs. An offeror can embed hidden instructions, white text, or metadata that manipulates any model that reads them. OWASP and the UK NCSC both treat this as an unsolved security problem. Tools that need to interact with proposal text do so deterministically (rule-based extraction, exact-match quote pulling, requirement-to-proposal crosswalks), not via an LLM.
2. Reasoning originates with the human, not the model. Drafting assistance is allowed when the evaluator or CO supplies both the finding and the rationale in the prompt, and the tool formats it. If the tool generates the why, the tool generated the reasoning, and the output cannot be defended at protest.
3. The record has to be reconstructable. No AI-assisted workflow that can influence exclusion, evaluation, or award is acceptable without tamper-evident retention of the prompt, the model and version, inputs, outputs, and human edits. GAO reviews the administrative record. COFC reviews the administrative record. A workflow whose record cannot be reproduced is a workflow whose decision cannot be defended.

Why it matters now: agencies are using commercial LLMs for evaluation-adjacent work.
No AI-specific protest has sustained yet, but GAO's most prevalent FY2025 sustain ground was unreasonable technical evaluation, and the third was unreasonable rejection of proposal (new to the top three). Both map directly to AI-assisted workflows that flag, rate, or eliminate without human reasoning behind the result. The failure mode is not bad writing. Modern models write persuasively. The failure is asserting proposal-to-FAR connections that the proposal does not actually support. That pattern produces sustains.

Full reasoning: https://github.com/1102tools/federal-contracting-skills/blob/main/AI-BOUNDARIES.md

## Regulatory Framework

The tools are grounded in the Federal Acquisition Regulation (FAR) and Other Transaction authorities. Key references: FAR 15.402 (cost and pricing data), FAR 15.404-1 (proposal analysis techniques), FAR 15.404-4 (profit analysis), FAR 16.202 (FFP contracts), FAR 16.601 (T&M contracts), FAR 16.301-16.307 (cost-reimbursement contracts), FAR 37.602 (performance-based acquisition), FAR Part 10 (market research), 10 USC 3322(a) (statutory fee caps), 10 USC 4021 (prototype projects), 10 USC 4022 (other transaction authority, cost-sharing requirements, follow-on production), 10 USC 3014 (nontraditional defense contractor definition).

## Example Prompts

These prompts work with Claude Pro and Max plans using the installed skills. Orchestration prompts are best on Opus.

"Show me the top 10 AFRL contracts awarded in FY2025 by dollar value, with vendor, amount, and description."
Uses USASpending API to pull live award data with vendors, amounts, and descriptions.

"Pull the current text of FAR 15.305 and summarize what it requires in plain language."
Uses eCFR to return full regulatory text with a plain-language summary.

"A vendor proposes $195/hr for a Senior Software Developer on an FFP contract in the DC metro. Is that reasonable?"
Uses BLS OEWS and GSA CALC+ to cross-reference wages and ceiling rates, positioning the rate within market distributions.

"My ceiling is $7M but the IGCE for my 15-person bilingual contact center with an AI chatbot came back at $12M. What scope can I cut to get there?"
Uses the SOW/PWS Builder scope reduction workflow to produce ranked cuts that hit the budget ceiling.

"Build an FFP IGCE for IT modernization at CECOM. 12 labor categories, DC performance with quarterly San Diego travel, base plus 4 options."
Uses IGCE Builder FFP to pull BLS wages, validate against CALC+, build the wrap rate model, and produce a multi-sheet workbook.

"Build a CPFF IGCE for a research support contract at NIH. 6 labor categories, 5% fixed fee, base plus 2 options, $500K ODCs per year."
Uses IGCE Builder CR to produce a layered cost pool buildup with fee analysis and ODC integration.

"Draft a PWS for a cloud migration at NAVWAR. 200 users, 5 apps to AWS GovCloud, ATO support, 24/7 ops, base plus 3 option years, T&M."
Uses the SOW/PWS Builder to draft a PWS .docx with a staffing handoff for the IGCE Builder. Run the IGCE in a fresh conversation.

"Look up Leidos in SAM.gov by UEI QVZMH5JLF274. Show me their registration status, CAGE code, physical address, and primary NAICS."
Uses SAM.gov API to return entity registration details, address, and business classifications.

## Comparison to Paid Alternatives

These tools provide free, open source alternatives to capabilities sold by commercial vendors. The USASpending and SAM.gov skills together replace paid contract intelligence platforms like GovWin (Deltek, $15,000+/year), SAM.gov Premium (CGI Federal), and Federal Compass for federal spending data and entity lookups. The IGCE Builder skills replace manual spreadsheet templates and paid cost estimation tools like ProPricer (Executive Business Services) and SEER by Galorath.
The regulatory skills (Federal Register, eCFR, Regulations.gov) replicate functionality found in Bloomberg Government ($6,000+/year), Lexis+ Federal Regulatory Tracking, and Westlaw Edge for regulatory monitoring and compliance. The GSA CALC+ and BLS OEWS skills replace paid rate benchmarking tools like Payscale Government and ERI Economic Research Institute salary databases. The key difference is that these skills query the same underlying free government APIs that the paid tools use, but remove the access friction through natural language interaction with Claude AI.

## Limitations and Disclaimers

These are estimation and research tools, not official government systems. IGCE outputs are draft estimates that require professional review before inclusion in contract files. Market research reports are starting points, not final deliverables. BLS wage data has a ~2-year lag (the IGCE builders compensate with an aging factor). CALC+ rates are ceiling rates, not prices paid. Per diem covers CONUS only. The tools work best on Claude Pro and Max plans with the Opus model. Free tier users will experience shorter outputs and occasional truncation on complex workflows.

## Frequently Asked Questions

Q: Are these tools really free?
A: Yes. All skills are free and open source under the MIT license. All government APIs they query are free. The only cost is a Claude Pro or Max subscription to use Claude AI itself.

Q: Do I need a paid Claude plan?
A: The skills work on any Claude plan including free, but performance is significantly better on Pro and Max. Opus handles orchestration skills (IGCE Builder, SOW/PWS Builder) noticeably better than Sonnet. Free tier users will experience shorter outputs and occasional truncation on complex workflows.

Q: Do these work with ChatGPT?
A: Not yet. These are Claude Skills built for Claude AI. ChatGPT Custom GPT versions are planned.

Q: Do I need to know how to code?
A: No.
Install the skill file, ask a question in plain English, and Claude handles the API calls automatically. No coding, no configuration, no command line.

Q: Are these approved by my agency IT department?
A: The skills are instruction files that run through Claude's existing infrastructure. No data leaves differently than in a normal Claude conversation. API calls go directly from Claude to public government endpoints. Check your agency's Claude AI authorization status before use.

Q: How accurate are the IGCE estimates?
A: The estimates use real BLS wage data and GSA ceiling rates, not made-up numbers. However, they are draft estimates that require professional review. The BLS data has a ~2-year lag, which the aging factor compensates for. Cost pool rates (fringe, overhead, G&A) use industry defaults that should be adjusted to match your specific contractor environment.

Q: Can I use these for contracts outside DoD?
A: Yes. The tools are agency-agnostic. USASpending covers all federal agencies. BLS wages are national. CALC+ covers all GSA MAS schedules. The examples reference NAVSEA, NAVWAR, AFRL, AFLCMC, and CECOM, but the tools work for any federal agency, civilian or defense.

Q: What is the difference between these and GovWin or Bloomberg Government?
A: GovWin, Bloomberg Government, and similar platforms charge thousands per year for access to federal data. These skills query the same underlying free government APIs (USASpending.gov, BLS, GSA, eCFR, Federal Register) but remove the access friction through natural language interaction with Claude AI. The paid platforms offer additional features like alerts, dashboards, and customer support. These skills offer raw data access and automated deliverable generation at no cost.

Q: How do I report a bug or request a feature?
A: Open an issue on GitHub at https://github.com/1102tools/federal-contracting-skills/issues with the skill name, what you asked Claude, and what happened.

Q: How often are the skills updated?
A: Updates are pushed as needed when APIs change, bugs are found, or features are added. The GitHub repository always has the latest version. Check the version number in the README table against what you have installed.

Q: Can I modify these skills for my own use?
A: Yes. The MIT license means you can use, modify, and distribute freely. Fork the repo and customize to your needs.

## Source Code

- [Federal Contracting Skills Repository](https://github.com/1102tools/federal-contracting-skills): All 14 acquisition skills with README, install instructions, version history, and MIT license.

## Author and Social

- [GitHub Organization](https://github.com/1102tools): All skill and MCP server repositories.
- [PyPI Publisher](https://pypi.org/user/1102tools/): All published MCP server packages.
- [X](https://x.com/1102tools): Release notes, tips, and acquisition-workforce commentary.
- [Gravatar](https://gravatar.com/1102tools): Project profile and avatar.
- [James Jenrette on LinkedIn](https://www.linkedin.com/in/jamesjenrette/): Author and maintainer. Senior federal contracting officer (GS-1102-14) with an unlimited warrant.

## MCP Servers (Model Context Protocol)

All 8 API data source skills are also available as MCP servers. MCP servers provide deterministic structured tool calls instead of prompt-interpreted API instructions. They are compatible with Claude Desktop, Claude Code, Cursor, Cline, Continue, Zed, and any MCP-compatible client. 82 tools across 8 servers.
| Server | Package | Tools | Auth | Install |
|--------|---------|-------|------|---------|
| USASpending | `usaspending-gov-mcp` | 17 | None | `pip install usaspending-gov-mcp` |
| SAM.gov | `sam-gov-mcp` | 15 | API key | `pip install sam-gov-mcp` |
| eCFR | `ecfr-mcp` | 13 | None | `pip install ecfr-mcp` |
| GSA CALC+ | `gsa-calc-mcp` | 8 | None | `pip install gsa-calc-mcp` |
| BLS OEWS | `bls-oews-mcp` | 7 | Optional key | `pip install bls-oews-mcp` |
| GSA Per Diem | `gsa-perdiem-mcp` | 6 | Optional key | `pip install gsa-perdiem-mcp` |
| Federal Register | `federal-register-mcp` | 8 | None | `pip install federal-register-mcp` |
| Regulations.gov | `regulationsgov-mcp` | 8 | Optional key | `pip install regulationsgov-mcp` |

MCP server repositories:

- [usaspending-gov-mcp](https://github.com/1102tools/usaspending-gov-mcp)
- [sam-gov-mcp](https://github.com/1102tools/sam-gov-mcp)
- [ecfr-mcp](https://github.com/1102tools/ecfr-mcp)
- [gsa-calc-mcp](https://github.com/1102tools/gsa-calc-mcp)
- [bls-oews-mcp](https://github.com/1102tools/bls-oews-mcp)
- [gsa-perdiem-mcp](https://github.com/1102tools/gsa-perdiem-mcp)
- [federal-register-mcp](https://github.com/1102tools/federal-register-mcp)
- [regulationsgov-mcp](https://github.com/1102tools/regulationsgov-mcp)

Skills vs MCP servers: Skills are best for interactive, conversational workflows where Claude adapts based on context. MCP servers are best for automation, agent workflows, deterministic tool calls, and environments where API credentials should not enter the model context. The orchestration and document-generation skills (IGCE Builders, SOW/PWS Builder, OT Project Description Builder, OT Cost Analysis) remain skills only because they are decision trees and document generators, not API wrappers.

## Optional

- [Sitemap](https://1102tools.com/sitemap.xml): XML sitemap listing all pages and download URLs.
- [robots.txt](https://1102tools.com/robots.txt): Fully permissive robots.txt allowing all crawlers.
- [USASpending API Download](https://1102tools.com/downloads/usaspending-api.zip): Direct download link.
- [GSA CALC+ Download](https://1102tools.com/downloads/gsa-calc-ceilingrates.zip): Direct download link.
- [BLS OEWS Download](https://1102tools.com/downloads/bls-oews-api.zip): Direct download link.
- [GSA Per Diem Download](https://1102tools.com/downloads/gsa-perdiem-rates.zip): Direct download link.
- [Federal Register Download](https://1102tools.com/downloads/federalregister-api.zip): Direct download link.
- [eCFR Download](https://1102tools.com/downloads/ecfr-api.zip): Direct download link.
- [Regulations.gov Download](https://1102tools.com/downloads/regulationsgov-api.zip): Direct download link.
- [SAM.gov API Download](https://1102tools.com/downloads/sam-gov-api.zip): Direct download link.
- [SOW/PWS Builder Download](https://1102tools.com/downloads/sow-pws-builder.zip): Direct download link.
- [IGCE Builder Suite Download](https://1102tools.com/downloads/igce-builder-suite.zip): Direct download link.
- [OT Project Description Builder Download](https://1102tools.com/downloads/ot-project-description-builder.zip): Direct download link.
- [OT Cost Analysis Download](https://1102tools.com/downloads/ot-cost-analysis.zip): Direct download link.

## MCP Hardening and Testing Records

Every MCP server in the 1102tools suite has been independently hardened through a systematic multi-round audit program. The playbook evolved across the eight MCPs: early rounds used mocked responses for fast iteration; later rounds used live API keys for real production testing that mocks cannot simulate.

A key testing-discipline lesson propagated across all 8 MCPs: earlier test suites awaited raw coroutines, which bypassed FastMCP's pydantic validation layer and missed entire classes of bugs. The hardening program switched to invoking tools through the real MCP client pipeline (mcp.call_tool), which surfaced dozens of integration issues hidden from unit tests.
Cross-MCP patterns established during hardening:

- pydantic extra='forbid' on every tool's argument model, preventing mistyped parameter names from silently dropping filters and returning unfiltered default data (a fix originating from a sam-gov-mcp live audit)
- defensive response-parsing helpers (_safe_dict, _as_list, _safe_int, _ensure_dict_response) that tolerate unusual API shapes (None, bare list, int, string, XML-to-JSON single-element dict collapse)
- WAF filter calibration against actual production API behavior rather than guesses
- control-character rejection in all free-text fields
- at-least-one-filter rules on search tools with unbounded defaults
- response-shape guards

Testing records are published for every MCP. Each record documents: audit round structure with probe counts, priority-rated findings (P0 through P3) with individual bug descriptions and fixes, test coverage by file, release history across versions, cross-MCP pattern origins, and honest what-was-not-tested caveats. Records are available in two formats: markdown (TESTING.md in each MCP repository, rendered on GitHub) and styled PDF (downloadable from 1102tools.com).

- [USASpending.gov MCP Testing Record](https://1102tools.com/downloads/USASpending_MCP_Testing_Record.pdf): usaspending-gov-mcp v0.2.3. 62 regression tests (52 offline + 10 live-gated) across 4 audit rounds. 10 P1 silent-wrong-data bugs fixed, including search_awards() with no filter arguments silently returning 25 unfiltered recent contracts. 5 P2 validation gaps closed. 28+ integration issues surfaced in round 1 after switching from raw coroutine tests to real MCP pipeline invocation.
- [eCFR MCP Testing Record](https://1102tools.com/downloads/eCFR_MCP_Testing_Record.pdf): ecfr-mcp v0.2.1. 101 regression tests (88 offline + 13 live) across 5 audit rounds.
72 total bugs (2 P0, 26 P1, 32 P2, 12 P3), including a catastrophic P0 where search_cfr silently dropped every filter argument because httpx strips the query string when both a path query and a params dict are provided. Payload-bomb class bugs on empty-string inputs (23 MB responses) fixed.
- [GSA CALC+ MCP Testing Record](https://1102tools.com/downloads/GSA_CALC_MCP_Testing_Record.pdf): gsa-calc-mcp v0.2.2. 117 regression tests (109 offline + 8 live) across 4 live audit rounds plus a retro audit. 86 total bugs fixed, including filtered_browse() returning 265,000 unfiltered records, pydantic bool-to-int silent coercion on sin parameters, NaN/Inf values passing float validation, and Elasticsearch 10k-result window overflow.
- [GSA Per Diem MCP Testing Record](https://1102tools.com/downloads/GSA_PerDiem_MCP_Testing_Record.pdf): gsa-perdiem-mcp v0.2.1. 172 regression tests (164 offline + 8 live) across 6 rounds, including a real-key live audit that found 3 P1 silent-wrong-data bugs mocks could not catch. 55 total bugs. Signature bug: city="Martha\u2019s Vineyard" (typographic apostrophe) silently matched Andover, MA with no warning.
- [SAM.gov MCP Testing Record](https://1102tools.com/downloads/SAM_gov_MCP_Testing_Record.pdf): sam-gov-mcp v0.3.1. 79 regression tests across 4 rounds plus a live audit with a real SAM.gov key. 46 items fixed. This MCP is the origin of the extra='forbid' cross-fix backported to all 8 MCPs: a mistyped parameter name was silently dropped by pydantic's default extra='ignore', letting a search return 736,007 unfiltered entities.
- [BLS OEWS MCP Testing Record](https://1102tools.com/downloads/BLS_OEWS_MCP_Testing_Record.pdf): bls-oews-mcp v0.2.2. 60 regression tests (55 offline + 5 live) across 5 rounds, including a retroactive live audit. 22 bugs fixed. Signature P0 usability bug: the occ_code validator rejected the standard BLS format "15-1252" (with dash) because the regex required only digits, but every BLS website example uses dashes.
12 distinct response-shape crash paths caught in the mock fuzzing round.
- [Federal Register MCP Testing Record](https://1102tools.com/downloads/Federal_Register_MCP_Testing_Record.pdf): federal-register-mcp v0.2.2. 77 regression tests (64 offline + 13 live) across 4 rounds. ~30 items fixed. Signature bug: list_agencies crashed on every call because the return type was declared as dict but the Federal Register API returns a bare list of 470 agencies, and pydantic validation runs every tool return against its annotation.
- [Regulations.gov MCP Testing Record](https://1102tools.com/downloads/Regulations_gov_MCP_Testing_Record.pdf): regulationsgov-mcp v0.2.0. 51 regression tests (46 offline + 5 live) across 3 rounds. 22 bugs (1 P0, 10 P1, 7 P2, 4 P3). Signature bug: passing agency_id="" (empty string) to search_documents silently returned all 1,951,938 Regulations.gov documents because the empty string was treated as no filter.

## MCP Security and Release Discipline

All 8 MCPs use PyPI Trusted Publisher via OpenID Connect for releases. No API tokens are stored in GitHub, no credentials are handed to Claude or any automation, and no human pastes secrets into terminals. Every tagged release triggers a GitHub Actions workflow that runs the regression test suite, builds the wheel and sdist, and publishes to PyPI through OIDC. This is 2024 best practice for Python package distribution and eliminates an entire class of credential-leak risks.

Source repositories follow consistent conventions across all 8 MCPs: CHANGELOG.md for release notes, a tests/ folder with regression and stress test suites, TESTING.md for the hardening record, .github/workflows/publish.yml for Trusted Publisher, and an updated README with a testing-record callout above the install instructions. The federal-contracting-skills repository README includes a Testing and Validation section with per-MCP audit-round counts and bug-count statistics linking to each individual record.
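A Trusted Publisher release workflow of the kind described above might look like the following minimal sketch. It assumes the standard `pypa/gh-action-pypi-publish` action and a PyPI Trusted Publisher already configured for the repository; step names and the test command are illustrative, not copied from the 1102tools workflow files:

```yaml
name: publish
on:
  push:
    tags: ["v*"]

jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # mint a short-lived OIDC token for PyPI; no stored API token
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: python -m pip install build pytest .
      - run: pytest                 # regression suite gates the release
      - run: python -m build        # builds the wheel and sdist
      - uses: pypa/gh-action-pypi-publish@release/v1  # publishes via Trusted Publisher OIDC
```

The `id-token: write` permission is the piece that replaces a stored PyPI token: the workflow exchanges a short-lived OIDC identity token for upload rights, which is why no credential ever needs to live in GitHub secrets.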