Best Of
Re: AI Generated Responses from Applicants
We have just introduced an AI policy. However, it addresses staff use of AI tools: it essentially allows the use of AI, but all generated work must be reviewed by a human before being released publicly.
I ran a response to one of our scholarship questions through a website called Copyleaks. I very quickly used up my free allotment, but I know there are other detectors if you Google them. Of the six responses I tested, only one came back as AI-generated. As this is the first year we are even thinking about AI, I'm not sure this is a huge issue for us at this point.
I recently attended a webinar hosted by the Technology Association of Grantmakers (TAG) where they shared a couple of IT policy templates, including one for AI. I'm attaching it in case it helps.
Michelle Collins
Oakville Community Foundation
Re: Scholarships are in, now what
We do not use the UA, so I don't know how applicable our process would be if you do. During the first screening, we abandon incomplete/draft applications. I run reports for our different scholarships so it's easier to look through the applications for ones that don't meet the hard criteria of a specific scholarship (GPA, type of school attending, major, played a sport, etc.). Applications that don't meet the hard criteria I draft-deny at this point. I use the reports function a lot when we are screening and evaluating applications.

I then review any remaining applications in the submitted bucket and mark them complete if all the information is there. Early in the application period, it's easier to review and mark applications complete as they come in, but the flurry of submissions in the last two days is hard to keep up with. It's more efficient for me to run reports first and deny any that don't meet the criteria. I also draft-deny applications as they are submitted if they clearly don't meet the hard criteria (e.g., wrong major). Hope that helps a little.