AI-Generated Responses from Applicants
My organization uses Foundant for Grant, Scholarship, and Assistantship applications. We recently received two applications whose essays were generated by AI (ChatGPT). The essay prompt was copied into ChatGPT, and the generated response was submitted directly to the scholarship application as the applicant's own work. We tested this by entering the same prompt ourselves, and the response came back nearly identical.
I am curious whether anyone has implemented an AI policy, or added language to applications regarding the use of AI. Our Board's annual meeting is in a few weeks, and this is a topic of discussion on our agenda. I would like to bring suggestions and examples from other nonprofit organizations to help aid their discussion.
Thank you in advance for your response.
Comments
-
Great topic, Carissa. Kudos to you for even catching that AI response. How did you know? We have not developed any policies or guidelines regarding AI, but I will be following this conversation to learn from others.
-
Honestly, it was the way the response was formatted; I thought it was odd to have summary bullet points in an essay. We have another that was very eloquently written yet redundant (which AI also has a tendency to produce).
-
I just came to Compass to post this same question. On one scholarship, we have 11 essays that are nearly identical; I would say 90% or more. What are you using to test the work? We will be looking at a policy soon and inserting specific AI language into next year's application. However, I consider it to fall under plagiarism and am seriously considering disqualifying those applicants for that scholarship.
Would love to hear others' experiences, thoughts, and policies.
-
I copied our scholarship essay prompt into ChatGPT, and it generated a nearly identical response; that's what gave it away.
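For anyone who wants something a bit more systematic than eyeballing the two texts side by side, here is a minimal sketch (using Python's built-in difflib; the file names are hypothetical placeholders) of how you could score how similar an applicant's essay is to a response you generated yourself from the same prompt. It's a rough heuristic for flagging applications for manual review, not proof of AI use.

```python
# Rough sketch: compare an applicant's essay against a response
# we generated ourselves from the same prompt in ChatGPT.
# File names below are hypothetical placeholders.
import difflib

with open("applicant_essay.txt", encoding="utf-8") as f:
    applicant = f.read()
with open("chatgpt_response.txt", encoding="utf-8") as f:
    generated = f.read()

# SequenceMatcher.ratio() returns a similarity score between 0 and 1;
# a very high score means the two texts share long identical passages.
ratio = difflib.SequenceMatcher(None, applicant, generated).ratio()
print(f"Similarity: {ratio:.0%}")

# The 0.8 threshold is an arbitrary starting point, not a standard.
if ratio > 0.8:
    print("Flag for manual review: near-identical to the generated text.")
```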
-
We do not currently have an AI policy, but we have students check a box certifying that the work above is their original work; otherwise, staff can remove their application from consideration.
We use the website https://gptzero.me/ to detect AI-generated work.
-
Great suggestion and information, Katie! Thank you!
-
We have just introduced an AI Policy. However, it addresses staff use of AI tools: it essentially allows the use of AI, but all generated work must be reviewed by humans before being released publicly.
I ran a response to one of our scholarship questions through a website called Copyleaks. I very quickly used up my free option, but I know there are other detectors if you Google them. Of the six responses I tested, only one came back as AI-generated. As this is the first year we are even thinking about AI, I'm not sure this is a huge issue for us at this point.
I recently attended a webinar hosted by the Technology Association of Grantmakers (TAG) where they shared a couple of IT policy templates, including one for AI. I'm attaching it in case it helps.
Michelle Collins
Oakville Community Foundation
-
Has anyone considered doing away with essays? Maybe replacing them with one or two short-response questions, so that AI generation would be more trouble than simply answering the questions.
-
@AlexQuesada We considered a video response instead of essays, but unfortunately, with the number of applications our committee reviews, video responses just aren't practical. I'm leaning toward your suggestion above: short answers, while encouraging students to discuss specific projects/programs they've completed, to steer answers in a more "personal" and local direction.
-
@AlexQuesada - I believe you are on the right track. We have a mix of personal-statement-type responses, some of which are generally longer essays and some that are more like short answers. I did find this year that the short-answer responses were much less likely to be AI-generated and were more personal in general. However, reviewers were more frustrated because short-answer responses usually meant more to read and score. There is still work to do on that front, but when debriefing the process this year, I specifically flagged for reviewers that shorter responses were usually not fully AI-generated.