Inference Practice Test
• 11 Questions

Public agencies increasingly rely on algorithmic tools to allocate resources, enforce rules, and prioritize inspections. Two competing transparency models dominate the debate. The first is maximal disclosure: publish the source code, training data documentation, and decision logic so that any member of the public can scrutinize the system. The second is constrained auditability: allow qualified auditors access to the system and its inputs under nondisclosure agreements or statutory confidentiality, then publish detailed assessment reports, error analyses, and mitigation steps, but not the full technical artifacts. Proponents of disclosure argue that democratic legitimacy demands that the public see how the state is being automated.

Yet practitioners note important trade-offs. In a fare-evasion detection system used by one transit authority, full publication of model details led to rapid adaptation by fare evaders, who learned which stations and times were deprioritized; enforcement efficacy fell until the model was retrained and thresholds were altered. By contrast, an air-quality inspection triage tool subjected to third-party auditing, with synthetic test cases and input perturbation studies performed under confidentiality, surfaced biases against small manufacturers without revealing enough about the model to enable easy gaming. In both instances, courts permitted protection of certain proprietary features, provided that impact assessments were thorough and made public.

Small municipalities face capacity constraints: they lack in-house data scientists and cannot sustain repeated model retraining if strategies are widely exposed. Hybrid regimes have emerged in response: agencies publish high-level model cards, decision policies, and error bounds, while enabling sandbox audits in which civil-society groups probe behavior using curated datasets.
This approach aims to balance two values—explainability and enforceability—without committing to either complete secrecy or complete openness. Importantly, nothing in the hybrid model precludes eventual disclosure where risks of gaming are low; rather, the model is tailored to the risk profile of the domain, the stakes of errors, and the adaptability of regulated parties.
Which one of the following can be properly inferred from the passage?