Regulation & Policy for the Use of Artificial Intelligence
At the Kashf Initiative, we are committed to fostering academic integrity, transparency, and inclusivity in our approach to research, learning, and resource creation. Artificial Intelligence (AI) serves as a tool to enhance our capabilities, enabling us to deliver high-quality resources while upholding rigorous academic and ethical standards. This policy outlines how we use AI across our initiatives.
Purpose and Scope
This policy governs the use of AI tools and technologies within the Kashf Initiative, specifically for creating, collating, and disseminating knowledge resources such as guides, manuals, handbooks, and presentations. It also defines boundaries for sessions or resources that require specialized expertise and highlights our commitment to transparency and ethical practices.
Use of AI on the Platform
AI in Resource Creation
AI tools are used for:
Drafting and organizing educational guides, manuals, handbooks, and PowerPoint presentations.
Streamlining information collation from credible, open-access sources.
Generating initial drafts, which are rigorously reviewed and edited by team members to ensure accuracy and relevance.
AI in Knowledge Delivery
For sessions focusing on information literacy (e.g., open-access resources, literature search tools): AI may be used to supplement content creation and keep material up to date and relevant.
For sessions that demand expertise (e.g., disciplinary knowledge, advanced methodologies): Content is developed using standard, peer-reviewed sources, scholarly literature, and input from experts. Proper references and citations are included for all materials.
Acknowledgment and Transparency
Human Contributions: For resources and sessions that rely on specialized expertise, we will give due credit to individuals or organizations that contribute directly to the material or discussions.
Accuracy and Verification: AI-generated content is cross-checked with credible sources to ensure accuracy, avoid misinformation, and maintain academic rigor.
Bias Mitigation: AI tools are used cautiously to minimize any potential biases in the content generated. The team actively reviews AI outputs to ensure inclusivity and relevance.
AI Limitations
We recognize that while AI is a powerful tool, it has its limitations:
AI cannot replace human expertise, critical thinking, or academic judgment.
AI outputs may lack cultural, regional, or disciplinary context, which the team incorporates during review.
For critical topics or sessions requiring deep specialization, we rely solely on standard academic sources, expert contributions, and peer-reviewed literature.
Policy Review and Updates
This policy will be reviewed annually to ensure it remains aligned with technological advancements and the evolving needs of the Kashf Initiative.