
AI And Taxes: Responsible AI, Explainability & Due Process

2 Dec 2025, 9:02 am GMT

The deployment of AI systems in tax administration requires rigorous frameworks for responsible development, transparent decision-making, and democratic accountability. Drawing on recent research on AI's role in enhancing institutional quality across BRICS-plus countries, this chapter outlines comprehensive approaches to ethical AI deployment that strengthen rather than undermine democratic governance principles.

Tax audit selection and compliance assessment systems fall clearly within the EU AI Act's definition of high-risk AI applications due to their significant impact on fundamental rights and access to essential public services. This classification triggers comprehensive regulatory obligations that must be integrated into the system design from inception, rather than being retrofitted after deployment.

Risk management systems operate throughout the complete AI lifecycle, beginning with initial impact assessment during system design and continuing through deployment, monitoring, and eventual decommissioning. Technical risk assessments evaluate model accuracy, potential failure modes, and robustness to adversarial attacks. Operational risk assessments consider implementation challenges, staff training requirements, and the complexity of integration with existing systems.

Data governance frameworks address training data quality, representativeness, and potential sources of bias. Historical audit data may contain systematic biases that reflect past enforcement patterns, socioeconomic factors, or geographic variations, which could perpetuate unfair treatment. Bias mitigation techniques include dataset rebalancing, synthetic data augmentation, and fairness-aware machine learning algorithms that explicitly optimise for equitable outcomes.
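As a concrete illustration of the rebalancing approach, the following sketch reweights historical audit records so that under-represented combinations of group and outcome carry proportionally more weight during training. The column names and the inverse-frequency weighting scheme are illustrative assumptions, not a prescribed method.

```python
import numpy as np
import pandas as pd

def rebalancing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> np.ndarray:
    """Inverse-frequency weights: rare (group, outcome) cells get larger
    weights, dampening the skew inherited from past enforcement patterns."""
    cell_count = df.groupby([group_col, label_col])[label_col].transform("count")
    n_cells = df[group_col].nunique() * df[label_col].nunique()
    weights = len(df) / (n_cells * cell_count)
    return weights.to_numpy()

# Hypothetical usage with a historical audit dataset:
# df = pd.read_csv("historical_audits.csv")
# w = rebalancing_weights(df, group_col="income_bracket", label_col="audited")
# model.fit(X, y, sample_weight=w)  # most sklearn estimators accept sample_weight
```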

Quality management systems ensure ongoing compliance with accuracy, reliability, and performance standards. Automated testing validates model performance across different scenarios, whilst human oversight verifies system behaviour in edge cases. Change management processes ensure that system modifications maintain compliance whilst enabling necessary improvements.

Documentation requirements include comprehensive model cards that describe training methodologies, performance characteristics, intended use cases, and known limitations. Technical documentation enables regulatory audit, whilst operational documentation supports effective human oversight and system maintenance.
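A model card can be as simple as a structured, machine-readable record. The sketch below shows one possible shape for such a record; every field name and value is illustrative rather than drawn from any specific administration's practice.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    version: str
    training_methodology: str
    intended_use: str
    performance: dict        # e.g. validation metrics
    known_limitations: list
    fairness_metrics: dict   # e.g. gaps measured at release time

# All values below are purely illustrative.
card = ModelCard(
    model_name="audit-selection-risk-model",
    version="2.4.1",
    training_methodology="Gradient-boosted trees on 2018-2023 filings",
    intended_use="Rank returns for human audit review; not for automated penalties",
    performance={"auc": 0.87},
    known_limitations=["Sparse data for newly registered businesses"],
    fairness_metrics={"demographic_parity_gap": 0.03},
)
print(json.dumps(asdict(card), indent=2))  # machine-readable for regulatory audit
```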

Vertical equity and fairness implementation

GenAI in Tax Function | An infographic by Dinis Guarda

Vertical equity, ensuring fair treatment across different income levels, presents particular challenges for the design of AI systems in tax enforcement. Traditional enforcement patterns often exhibit systematic biases where middle-income taxpayers face disproportionate audit rates, whilst high-income taxpayers with sophisticated representation and complex structures receive relatively less scrutiny.

Implementing fairness metrics requires sophisticated statistical frameworks that can identify and measure different types of potential discrimination. Demographic parity requires that audit selection rates remain comparable across income brackets (its conditional variant controls for legitimate risk factors). Equalised odds requires that prediction accuracy, measured through true-positive and false-positive rates, remains consistent across taxpayer segments. Individual fairness requires that similar taxpayers receive similar treatment regardless of protected characteristics.
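The sketch below shows how the first two metrics might be computed for binary audit-selection decisions; the array names and group encoding are assumptions for illustration.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in audit-selection rate between any two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

def equalised_odds_gap(y_true: np.ndarray, y_pred: np.ndarray,
                       group: np.ndarray) -> float:
    """Largest between-group gap in true-positive or false-positive rate."""
    tprs, fprs = [], []
    for g in np.unique(group):
        m = group == g
        tprs.append(y_pred[m & (y_true == 1)].mean())  # rate among truly non-compliant
        fprs.append(y_pred[m & (y_true == 0)].mean())  # rate among compliant
    return float(max(max(tprs) - min(tprs), max(fprs) - min(fprs)))
```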

The research on AI's interaction with institutional quality across BRICS-plus countries reveals bidirectional causality between AI deployment and governance quality, indicating that responsible AI implementation can strengthen institutional frameworks when properly designed. However, the research also notes the necessity of caution regarding AI-institutional quality interactions, emphasising the importance of robust governance measures to mitigate potential adverse effects.

Pre-deployment testing employs comprehensive bias auditing using multiple statistical techniques. Disparate impact analysis compares audit selection rates across demographic groups whilst controlling for legitimate risk factors. Causal inference methods attempt to isolate the effects of protected characteristics on enforcement decisions. Stress testing evaluates system behaviour under different scenarios and population distributions.
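One widely used disparate impact screen compares the lowest and highest group selection rates. The 0.8 threshold below is a convention borrowed from US employment law and is shown purely as an example, not a tax-specific standard.

```python
import numpy as np

def disparate_impact_ratio(selected: np.ndarray, group: np.ndarray):
    """Ratio of the lowest to the highest group selection rate."""
    rates = {g: float(selected[group == g].mean()) for g in np.unique(group)}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical usage:
# ratio, rates = disparate_impact_ratio(audit_flags, income_bracket)
# if ratio < 0.8:   # illustrative threshold; triggers deeper causal analysis
#     ...
```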

Ongoing monitoring systems track fairness metrics in production environments, identifying emerging bias patterns that might develop as economic conditions or compliance patterns evolve. Automated alerting systems notify governance teams when fairness metrics exceed acceptable thresholds, triggering investigation and potential corrective action.
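A minimal version of such an alerting check might look like the following, assuming fairness metrics are recomputed over a rolling window; the thresholds shown are placeholders, not recommended values.

```python
import logging

# Placeholder thresholds; real values would come from governance policy.
THRESHOLDS = {"demographic_parity_gap": 0.05, "equalised_odds_gap": 0.05}

def check_fairness(window_metrics: dict) -> list:
    """Return the metrics in the current window that breach their threshold."""
    breaches = [name for name, value in window_metrics.items()
                if value > THRESHOLDS.get(name, float("inf"))]
    for name in breaches:
        logging.warning("Fairness threshold breached: %s = %.3f",
                        name, window_metrics[name])
    return breaches  # a breach triggers investigation, not automatic rollback
```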

Explainability frameworks and stakeholder communication

Application of AI in Financial Modeling | An infographic by Dinis Guarda

Explainable AI implementation must address diverse stakeholder needs whilst maintaining appropriate levels of detail and technical sophistication for different audiences. Tax officials require detailed feature attribution and confidence measures. Taxpayers require clear and accessible explanations of audit selection or compliance assessment decisions. Legal representatives preparing appeals require sufficient technical detail to enable meaningful challenge.

SHAP (SHapley Additive exPlanations) values provide mathematically rigorous attribution of model decisions to input features. Each audit selection includes a decomposition showing the relative contribution of different factors: "Selected based on: industry benchmark deviations (0.35 contribution), expense timing patterns (0.25), network risk indicators (0.25), historical compliance patterns (0.15)." These attributions enable both human oversight and stakeholder communication.
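In practice, this kind of attribution can be produced with the open-source shap library. The sketch below assumes a tree-based selection model and illustrative feature names; the exact return shape of shap_values varies by model type.

```python
import shap  # pip install shap

def explain_selection(model, case_row, feature_names):
    """Per-feature contributions for one audit-selection score."""
    explainer = shap.TreeExplainer(model)               # suits tree-based models
    contributions = explainer.shap_values(case_row)[0]  # first (only) row
    ranked = sorted(zip(feature_names, contributions),
                    key=lambda pair: abs(pair[1]), reverse=True)
    return [(name, round(float(value), 2)) for name, value in ranked]

# Hypothetical call, echoing the worked example above:
# explain_selection(audit_model, taxpayer_features, FEATURE_NAMES)
# -> [("industry_benchmark_deviation", 0.35), ("expense_timing", 0.25), ...]
```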

Attention visualisation techniques for neural network models reveal which specific inputs most influenced particular decisions. This proves particularly valuable for document analysis and text processing applications, where understanding model focus areas helps validate the appropriateness of decisions.

Counterfactual explanations examine how different input values would alter model outputs, enabling stakeholders to understand the sensitivity and robustness of individual decisions. "If reported expenses were 10% lower, audit probability would decrease to the 15th percentile" provides actionable insight for both oversight and taxpayer understanding.
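A simple counterfactual probe can be implemented by scaling one input and searching for the point where the decision flips; the score() interface and feature name below are assumptions for illustration.

```python
import numpy as np

def counterfactual_scale(score, case: dict, feature: str, threshold: float = 0.5):
    """Smallest scaling of `feature` (within ±50%) that flips the decision."""
    base_flag = score(case) >= threshold
    # Probe scalings nearest the original value first.
    for s in sorted(np.linspace(0.5, 1.5, 101), key=lambda s: abs(s - 1.0)):
        probe = dict(case)
        probe[feature] = case[feature] * s
        if (score(probe) >= threshold) != base_flag:
            return s, score(probe)
    return None, score(case)  # no flip within the probed range

# e.g. counterfactual_scale(audit_probability, taxpayer, "reported_expenses")
# -> (0.9, 0.31): a 10% reduction in expenses flips the selection decision
```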

Evidence-based explanation validation ensures that AI explanations accurately reflect actual model behaviour rather than providing plausible but incorrect reasoning. This requires ongoing testing and calibration of explanation systems against model internals and decision outcomes.
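One common faithfulness check is a deletion test: mask the features the explanation ranks highest and confirm the score moves as the explanation implies. The sketch below assumes a dictionary-based case representation and neutral baseline values.

```python
def deletion_check(score, case: dict, attributions, baseline: dict, top_k: int = 3):
    """Score drop after masking the k features the explanation ranks highest.
    A faithful explanation implies a substantial drop; a negligible one
    suggests the stated reasoning does not reflect the model's behaviour."""
    top = sorted(attributions, key=lambda kv: abs(kv[1]), reverse=True)[:top_k]
    masked = dict(case)
    for name, _ in top:
        masked[name] = baseline[name]  # replace with a neutral reference value
    return score(case) - score(masked)
```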

Contestability and democratic due process

Democratic governance requires that AI-influenced decisions remain subject to effective challenge through established legal procedures. Tax administration AI systems must integrate seamlessly with existing appeals processes whilst providing documentation and explanation capabilities necessary for meaningful review.

Complete audit trail implementation captures all relevant information about AI-assisted decisions, including input data used, model versions and configurations, confidence scores and uncertainty estimates, human oversight activities, and any manual adjustments or overrides. Blockchain-based logging ensures the immutability of audit trails while maintaining appropriate access controls.
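The essential property of blockchain-based logging, tamper evidence, can be sketched with a simple hash chain in which each record commits to the hash of its predecessor. The record fields below follow the list above; persistence and access control are left out of this illustration.

```python
import hashlib, json, time

class AuditTrail:
    """Append-only log where each record commits to its predecessor's hash,
    so any retrospective edit breaks the chain and becomes detectable."""
    def __init__(self):
        self.records, self._last_hash = [], "0" * 64  # genesis value

    def append(self, decision: dict) -> dict:
        record = {"timestamp": time.time(), "prev_hash": self._last_hash,
                  "decision": decision}
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = record["hash"]
        self.records.append(record)
        return record

trail = AuditTrail()
trail.append({"taxpayer_ref": "T-1042", "model_version": "2.4.1",
              "score": 0.82, "human_override": False})  # illustrative fields
```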

Appeals support systems provide specialised tools for reviewing contested AI decisions. Case reconstruction capabilities recreate the exact system state at the time of decision, enabling a precise review of decision-making processes. Sensitivity analysis identifies which input factors most influenced contested decisions whilst exploring robustness to minor data variations.
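A basic sensitivity report for appeals review can be produced by nudging each numeric input by a small relative amount and recording how the contested score moves; the 5% perturbation and score() interface below are illustrative assumptions.

```python
def sensitivity_report(score, case: dict, rel_eps: float = 0.05) -> dict:
    """Change in the contested score when each numeric input rises by 5%."""
    base = score(case)
    report = {}
    for name, value in case.items():
        if isinstance(value, (int, float)):
            probe = dict(case)
            probe[name] = value * (1 + rel_eps)
            report[name] = score(probe) - base
    # Most influential inputs first, for the reviewing officer.
    return dict(sorted(report.items(), key=lambda kv: abs(kv[1]), reverse=True))
```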

Expert witness capabilities ensure that technical staff can effectively explain the operation of AI systems in legal proceedings. Standardised explanation formats, technical documentation suitable for legal review, and training in legal communication enable effective system defence whilst maintaining transparency about capabilities and limitations.

Legal representation support includes access to sufficient technical details for meaningful challenge preparation, while protecting operational security and system integrity. This balance requires careful consideration of what information enables a fair contest without compromising system effectiveness.

Research integration and continuous improvement

The BRICS-plus research findings emphasise the importance of treating AI governance as a dynamic capability requiring continuous development rather than a static compliance exercise. The research demonstrates that nations with more adaptive governance frameworks achieve better long-term outcomes from the deployment of AI.

Academic collaboration enables ongoing validation of system fairness, effectiveness, and alignment with democratic values. Partnerships with universities provide independent research capabilities while contributing to a broader understanding of AI governance in public sector applications.

International best practice monitoring ensures continuous learning from global experiences in the responsible deployment of AI. Regular comparison with international developments identifies opportunities for improvement while maintaining awareness of emerging risks and mitigation strategies.

Stakeholder engagement processes include regular consultation with taxpayer representatives, civil society organisations, and professional bodies. This engagement ensures that governance frameworks evolve in line with stakeholder needs, while maintaining legitimacy and public trust, which are essential for long-term success.

The responsible AI framework described here recognises that technological capability must be matched by institutional wisdom and democratic accountability. Success requires viewing transparency, fairness, and citizen rights not as constraints on AI deployment but as essential enablers of sustainable transformation that serve both operational efficiency and democratic governance objectives.


Dinis Guarda

Author

Dinis Guarda is an author, entrepreneur, and founder and CEO of ztudium, Businessabc, citiesabc.com and Wisdomia.ai. Dinis is an AI leader, researcher and creator who has been building proprietary solutions based on technologies like digital twins, 3D, spatial computing, and AR/VR/MR. He is the author of multiple books, including "4IR AI Blockchain Fintech IoT Reinventing a Nation", among others. Dinis has collaborated with the likes of the UN / UNITAR, UNESCO, the European Space Agency, IBM, Siemens and Mastercard, and with government bodies such as USAID and the Government of Malaysia, to mention a few. He has been a guest lecturer at business schools such as Copenhagen Business School. Dinis is ranked as one of the most influential thought leaders in Thinkers360 / Rise Global's The Artificial Intelligence Power 100, and among the top 10 thought leaders in AI, smart cities, the metaverse, blockchain, and fintech.