Joseph Ulatowski

Generative AI Policy
The introduction of generative AI into education, especially in philosophy classrooms, resembles earlier technological shifts such as the arrival of calculators in math classes and of the internet in university courses. These comparisons help explain the current moment:
🧮 Calculators in Math Classrooms
- Before calculators: Students performed all calculations manually, which built foundational skills but was time-consuming.
- After calculators: Educators allowed calculators for certain tasks (e.g., complex arithmetic or graphing) but still required students to understand the underlying math. Calculators became tools, not substitutes, for mathematical reasoning.
- Parallel with AI: Generative AI can help with brainstorming, organizing thoughts, or clarifying concepts, but students still need to understand and construct their own philosophical arguments. Like calculators, AI is a tool that must be used wisely and with oversight.
🌐 Internet in University Courses
- Before the internet: Research was done in libraries, and students relied heavily on textbooks and lectures.
- After the internet: Students gained access to vast resources, but also faced challenges like misinformation, plagiarism, and superficial engagement.
- Parallel with AI: Generative AI offers quick access to ideas and summaries, but it also raises concerns about originality, depth of understanding, and academic integrity. Just as students had to learn to evaluate online sources critically, they now must learn to use AI responsibly.
🧠 Why These Comparisons Matter
- They show that new tools don't replace learning; they reshape it.
- They highlight the need for clear policies and ethical guidelines.
- They emphasize the importance of transparency and critical thinking in using any tool.
In this course, the use of generative AI tools (e.g., ChatGPT, Copilot, Claude) is permitted only under specific conditions. Philosophy values clarity of thought, originality, and critical reasoning—skills that must be developed through your own intellectual effort. This policy is designed to support your learning while allowing for responsible, transparent use of AI where appropriate.
✅ Permitted Use (With Disclosure)
You may use generative AI tools only when explicitly allowed for a particular assignment or task. When permitted:
- You must disclose your use of AI in your submission.
- Your disclosure must include:
  - Which tool(s) you used.
  - How you used them (e.g., brainstorming objections, clarifying definitions, outlining arguments).
  - What content was generated or influenced by AI.
- You are responsible for evaluating, editing, and understanding any AI-assisted content. Submissions must reflect your own philosophical reasoning and voice.
❌ Prohibited Use
Unless explicitly permitted:
- Do not use AI tools to generate or revise content for essays, discussion posts, exams, or any other coursework.
- Do not use AI to generate arguments, counterarguments, or interpretations of philosophical texts.
- Using AI without disclosure is considered academic misconduct and may result in disciplinary action.
📘 Examples
- ✅ Permitted with disclosure:
  - Using ChatGPT to brainstorm possible objections to a utilitarian argument, then writing your own response and noting the AI's role.
  - Asking an AI to explain a concept like "epistemic justification" in simpler terms to aid your understanding (not to include in your paper).
- ❌ Prohibited:
  - Submitting an AI-generated essay on Descartes' cogito without disclosure.
  - Using AI to paraphrase or summarize readings for a take-home exam.
🧾 How to Disclose AI Use
Include a brief statement at the end of your assignment, such as:
“I used ChatGPT to help brainstorm possible objections to my argument about moral relativism. All writing and reasoning are my own.”
The following examples illustrate responsible AI use in a philosophy classroom, treating AI as a supportive tool rather than a substitute for critical thinking:
🧠 Responsible AI Use Examples
1. Brainstorming Ideas
- A student uses ChatGPT to generate possible objections to their argument about moral realism.
- They select one objection, refine it, and write their own response.
- They disclose:
  "I used ChatGPT to brainstorm objections to my argument. The objection I chose was revised and developed by me."
2. Clarifying Concepts
- A student asks an AI tool to explain "epistemic justification" in simpler terms to aid their understanding.
- They do not copy the explanation into their paper but use it to deepen their grasp of the concept.
- No disclosure is needed unless the explanation directly influences their writing.
3. Outlining an Essay
- A student uses AI to help structure their essay on Kant's categorical imperative.
- They write the content themselves and include a note:
  "I used Claude to help outline the structure of my essay. All content and arguments are my own."
4. Improving Grammar or Style
- A student uses AI to check grammar and improve clarity in their writing.
- They disclose:
  "I used Copilot to check grammar and improve sentence clarity. No content was generated by AI."
5. Comparing Philosophical Positions
- A student asks AI to summarize key differences between utilitarianism and deontology for study purposes.
- They use this to prepare for a class debate but do not submit AI-generated content.