AI-Generated Responses from Applicants

My organization uses Foundant for Grant, Scholarship, and Assistantship applications. We recently received two applications that were generated by AI (ChatGPT). The essay prompt was copied and pasted into ChatGPT, and the generated response was submitted directly to the scholarship application as the applicant's own work. We tested it ourselves, and the result came back nearly identical.
I am curious whether anyone has implemented an AI policy or added language to their applications regarding the use of AI. Our Board holds its annual meeting in a few weeks, and this is a topic of discussion on our agenda. I would like to bring suggestions and examples from other nonprofit organizations to help inform their discussion.
Thank you in advance for your response.
Comments
-
Great topic, Carissa. Kudos to you for even catching that AI response. How did you know? We have not developed any policies or guidelines re AI but I will be following this conversation to learn from others.
0 -
Honestly, it was the way the response was formatted; I thought it was odd to have summary bullet points in an essay. We have another that was very eloquently written and redundant (which AI has a tendency to be as well).
1 -
I just came to Compass to post this same question. On one scholarship, we have 11 essays that are nearly identical, I would say 90% or more. What are you using to test the work? We will be looking at a policy soon and inserting specific AI language into next year's application. However, I consider it to fall under plagiarism and am seriously considering disqualifying those applicants for that scholarship.
Would love to hear others' experiences, thoughts, and policies.
1 -
I copied our scholarship essay prompt into ChatGPT and it generated a nearly identical response; that's what gave it away.
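For anyone wanting to put a number on "nearly identical" instead of eyeballing it, here is a minimal sketch in Python using the standard library's difflib. The file names and the 0.8 flag threshold are placeholder assumptions, not anything built into Foundant:

```python
# Compare a submitted essay against the response you got by pasting the
# same prompt into ChatGPT. SequenceMatcher returns a 0..1 match ratio.
from difflib import SequenceMatcher

def similarity(text_a: str, text_b: str) -> float:
    """Return a rough 0..1 measure of how closely two texts match."""
    return SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()

ai_baseline = open("chatgpt_response.txt", encoding="utf-8").read()
submission = open("applicant_essay.txt", encoding="utf-8").read()

score = similarity(ai_baseline, submission)
print(f"Similarity to AI baseline: {score:.0%}")
if score >= 0.8:  # arbitrary starting threshold, not a calibrated value
    print("Flag for manual review: essay closely matches the AI-generated baseline.")
```

The same ratio can also be computed pairwise across a batch of essays to spot the kind of 90%+ overlap mentioned above.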
1 -
We do not currently have an AI policy, but we have students check a box certifying that the submitted work is their own original work; otherwise, staff can remove their application from consideration.
We use the website https://gptzero.me/ to detect AI-generated work.
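If you end up checking a lot of essays, GPTZero also offers a paid REST API. The sketch below is only an illustration: the endpoint, header, and response fields reflect my reading of GPTZero's public API documentation and should be verified against the current docs before use.

```python
# Hedged sketch of scoring one essay via GPTZero's API (assumed endpoint
# and field names; confirm against GPTZero's current API documentation).
import requests

GPTZERO_URL = "https://api.gptzero.me/v2/predict/text"  # assumed endpoint
API_KEY = "your-gptzero-api-key"                         # placeholder key

def score_essay(text: str) -> dict:
    response = requests.post(
        GPTZERO_URL,
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
        json={"document": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

result = score_essay(open("applicant_essay.txt", encoding="utf-8").read())
print(result)  # inspect the returned probabilities before deciding how to flag
```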
4 -
Great suggestion and information, Katie! Thank you!
2 -
We have just introduced an AI Policy. However, it addresses staff use of AI tools; it essentially allows the use of AI, but all generated work must be reviewed by humans before being released publicly.
I ran a response to one of our scholarship questions through a website called Copyleaks. I very quickly used up my free allowance, but I know there are other detectors if you Google them. Of the six I tested, only one came back as AI-generated. As this is the first year we are even thinking about AI, I'm not sure this is a huge issue for us at this point.
I recently attended a webinar hosted by the Technology Association of Grantmakers (TAG) where they shared a couple of IT policy templates, including one for AI. I'm attaching it in case it helps.
Michelle Collins
Oakville Community Foundation
5 -
Has anyone considered doing away with essays? Maybe replacing them with one or two short-response questions, so that it would be more trouble to bother with AI generation than to answer the short questions.
3 -
@AlexQuesada we considered a video response instead of essays but unfortunately with the number of applications our committee reviews, video responses just aren't practical. I'm leaning toward your above suggestion - short answers but encouraging students to discuss specific projects/programs they've completed, trying to force answers into a more "personal" and local direction.
3 -
@AlexQuesada - I believe you are on the right track. We have a mix of personal-statement-type responses, some of which are generally longer essays and some that are more like short answers. I did find this year that the short-answer responses were much less likely to be AI-generated and were more personal in general. However, reviewers were more frustrated because short-answer responses usually meant more to read and score. There is still work to do on that front, but when debriefing the process this year I specifically flagged for reviewers that shorter responses were usually not fully AI-generated.
2 -
I recognize this is an older post now, but thanks for posting this @CarissaGump - this is something we have been dealing with a lot lately, and we are thinking over the AI policy that we have yet to create but desperately need now. And thank you @MichelleCollins for sharing the policy from TAG.
I posted something on this general topic a couple of years ago (https://community.foundant.com/funders_community_foundations/discussion/2229/new-issue-a-i-or-ai-generated-responses-to-grant-questions-what-do-you-do) and learned about ZeroGPT for testing responses. We are a private foundation using GLM, so we use a simple sequence of narrative questions in our applications. In the last 6 months, though, our open process has received hundreds of requests written with AI - which we have generally thought was fine as a time-saver for applicants, but not when the entire request is written this way. We don't know where to draw the line: should we have a flat-out "No AI" policy, or put out a statement that says something like "applying for a grant when you are not a fit, and using AI to try to appear to fit, is not a good solution"? Our reviews are getting considerably more complicated and we are conflicted.
We have considered (though not officially implemented) the teacher's hack of adding an obscure, unique element to the question prompt using transparent text, so that, if applicants are copying and pasting without checking their work, it becomes apparent that the request is a quick, careless attempt at applying. This way we could potentially catch the issue before spending too much time analyzing the request. We are not sure whether this is perhaps a passive-aggressive approach or, given our limited capacity to process such a high volume of requests, still a fair means of processing applications. We do wish to use Trust Based Philanthropy practices, but I am not sure whether others using TBP practices are getting this high volume and are also stuck. If anyone has the solution, we are all ears!
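If we do try that hidden-text approach, the follow-up check could be as simple as scanning submitted requests for the "canary" wording that only shows up when the full prompt was pasted into an AI tool unread. A rough Python sketch, where the canary phrase, folder name, and flagging logic are all hypothetical examples rather than anything in GLM:

```python
# Scan submitted narratives for a hidden "canary" phrase that was embedded
# in the question prompt as transparent text. A submission echoing the
# phrase suggests the whole prompt was pasted into an AI tool unchecked.
from pathlib import Path

CANARY_PHRASES = ["harvest moon bridge"]  # made-up example phrase

def flag_canary_hits(folder: str) -> list[str]:
    """Return the filenames of submissions that echo any canary phrase."""
    flagged = []
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8").lower()
        if any(phrase in text for phrase in CANARY_PHRASES):
            flagged.append(path.name)
    return flagged

for name in flag_canary_hits("submitted_requests"):
    print(f"Review before scoring: {name} echoes the hidden prompt text")
```

A flagged request would still get a human look before any decision, which keeps this closer to a triage step than an automatic rejection.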
I will check out the recording from the Foundant Summit that was shared on another post. Thanks, @JakeSharp!
https://hub.foundant.com/vimeo-all-product-and-corporate-videos/summit-2024-keynote-with-amrit-saxena
1
