We all know Artificial Intelligence (AI) is here to stay, and President Pine's latest email made it official with the creation of an AI Commission, which is charged with exploring, evaluating, and providing recommendations on the responsible development, deployment, and ethical use of AI technologies within the university community. AI is making waves in higher education, from personalized learning experiences to automated administrative tasks. But as we dive deeper into this digital transformation, it's important to remember that with great power comes great responsibility, and, let's be honest, a few [cyber]security headaches.
While AI presents exciting possibilities, it also introduces a range of risks, especially when it comes to data security. From protecting your personal information to ensuring the tools we use don't go off the rails, it's up to all of us (students, faculty, and staff) to stay alert and secure as we integrate this technology into our daily lives.
Before you dive into using AI for note-taking, research, or even just scheduling your day, it's important to understand the potential flaws and risks associated with these tools. AI isn't foolproof:
- AI can get things wrong: It’s not infallible and can “hallucinate” incorrect facts (yikes).
- AI can be biased: If it's trained on biased data, it can produce biased results, hardly the ideal scenario in higher ed.
- AI needs tons of computing power: To work properly, AI requires massive amounts of data and processing power, which we will discuss in future topics.
- AI can be manipulated: It's vulnerable to malicious tricks, whether through prompt "injection attacks" or being coaxed into generating toxic content.
Some of the most pressing security concerns around AI include its potential to be used in cyberattacks. Generative AI can create sophisticated, adaptive malware that evolves over time, making it harder to defend against. Additionally, AI can be exploited to automate the delivery of malware, making cybercrime more efficient and dangerous. Data privacy is another major issue, as AI tools that process or store sensitive information, such as chat histories or personal data, could inadvertently expose that information. We’ve already seen this happen with tools like ChatGPT, where data leaks have raised alarms. AI is also vulnerable to data manipulation; if malicious actors feed false information into an AI system, it can lead to dangerous outcomes.
There's also the risk of impersonation: AI can create realistic deepfakes of voices or images, which could be used to deceive people or damage reputations.
🚨 Take Note 🚨 AI note-taking tools (like the ones that come with conference call software) are easy to use but often come with hidden risks. To protect yourself and your institution, start by using UMD-vetted and approved tools. When evaluating a new tool, it's important to ask the right questions: Do you know where your data is being stored and who has access to it? Understanding the potential consequences of using these tools can help ensure your personal and institutional data remains safe. We often use AI without thinking about how it might handle our data, but it's essential to be mindful of these risks.
But hey, let’s not fear the robots! There are simple steps we can all take to minimize risks and use AI safely. For instance:
- Avoid sharing sensitive info with AI: Keep personal or institutional data out of AI systems whenever possible.
- Back up your data and encrypt it: Ensure your files and information are secure with strong encryption and reliable backups.
- Take AI security training: Stay informed about AI risks (Hint: This year’s “Defend Your Shell” training is available in your Workday dashboard!).
- Limit access to sensitive data: Make sure only the right people have access to institutional data, especially when dealing with corporate information or autonomous systems.
- Familiarize yourself with AI guidelines: Read through the Guidelines for the Use of Generative Artificial Intelligence (GenAI) Tools at UMD and don’t hesitate to reach out to the IT Service Desk if you have any questions.
In conclusion, AI has the potential to revolutionize higher education, but it’s up to all of us to handle this powerful technology responsibly. By understanding the risks and taking proactive steps to secure our data and systems, we can embrace AI in a way that’s safe, ethical, and effective. Ultimately, the success of AI on campus won’t just be about how innovative it is—it will be about how securely and thoughtfully we integrate it into our daily lives.
