We’re pivoting our class to our final projects: You Are the Future and Humans+. These projects draw on everything we’ve learned this semester about imagining futures and using AI to expand our thinking. Here’s what students want to talk about as we move into the course’s final phase.
How seriously will people take AI solutions? For example, my You Are the Future project is about addiction to technology, and my solution was generated entirely by AI. Will people, schools, and counselors take this solution seriously as a real possibility, or will they dismiss it because it has so little human touch?
When will AI be able to outsmart human intelligence?
How can AI be used to bridge long-standing learning gaps in education and disparities in access to quality education?
Let’s talk about how AI and autonomous driving will impact traffic safety in the future.
We only have to do either You Are the Future OR Humans+, right?
Should we base our presentation on our poster, or on the work we did preparing for the poster?
One thing that interests me about the hypothetical future of AI: what if you could create an artificial consciousness of any individual, using their own memories, thoughts, and opinions as training data? What if you could build a ChatGPT-like interface on that data, effectively keeping loved ones alive long after they’re gone?
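For anyone prototyping this idea, the usual approach today is retrieval over a person’s own writing rather than literally retraining a model on their memories. Here is a minimal sketch of just that retrieval step, assuming scikit-learn is installed; the journal snippets and the question are invented placeholders, not anyone’s real data.

```python
# A toy "memory retrieval" step: given a question, find the most relevant
# snippet from a person's own writing, which could then be handed to a chat
# model as context. The memories below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

memories = [
    "I always said the lake house summers were the best part of my life.",
    "My advice to the kids: save early, and never skip family dinner.",
    "I voted the same way my whole life, but I changed my mind in 2008.",
]

question = "What did Grandpa think about family dinners?"

vectorizer = TfidfVectorizer()
memory_vectors = vectorizer.fit_transform(memories)
question_vector = vectorizer.transform([question])

# Rank memories by similarity to the question and keep the top match.
scores = cosine_similarity(question_vector, memory_vectors)[0]
best_memory = memories[scores.argmax()]

# This prompt would be sent to a chat model to answer "in their voice".
prompt = (
    "You are speaking as a deceased relative, using only their own words.\n"
    f"Relevant memory: {best_memory}\n"
    f"Question: {question}"
)
print(prompt)
```

A real system would use embeddings and many more documents, but the shape is the same: retrieve the person’s words, then let the model paraphrase them.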
How can we effectively manage the impacts of AI on jobs, ethics, healthcare, education, and privacy while ensuring fair access, responsible innovation, and protection of human values?
Drake just released a diss track using AI voices of Snoop Dogg and Tupac. Obviously, Tupac is dead and can’t really protest, but is this ethical? Snoop Dogg had no prior knowledge of Drake’s intentions.
How will advancements in AI change the way we retain information and affect our memory?
Can artificial intelligence be ethical? Is it possible to create a machine that can feel like a human?
Will AI overrun us, or will we have some sort of cable to pull when it gets too powerful?
We’ve got a big week coming up, and I’m excited about it. Finalizing our revisions today should be doable, and it’s nice to have the chance to do that. I’m a little nervous about getting the group project in on time this Thursday. I understand we just have to get something in, but it’s a big week!
What are the ethical considerations surrounding the use of AI in autonomous vehicles, and how can we ensure that these technologies prioritize safety and accountability?
When creating a final prototype for the You Are the Future project, should we focus on a combination of the likely and the probable?
What are some of the potential consequences of AI bias?
Is there a way to let AI make decisions for us without losing our human decision-making capacity?
Can you tell us more about making our own GPTs?
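Short answer for now: custom GPTs are configured through ChatGPT’s builder UI rather than code, but the core idea is just a chat model wrapped in standing instructions. Here is a minimal sketch of that idea using the OpenAI Python SDK, assuming the openai package is installed and an OPENAI_API_KEY is set in the environment; the persona text is an invented example.

```python
# A "custom GPT" is essentially a chat model plus standing instructions.
# This sketch wraps the OpenAI API with a fixed system prompt (persona).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are FutureCoach, a tutor for a class on imagining futures. "
    "Answer in plain language and always end with a follow-up question."
)

def ask(question: str) -> str:
    """Send one user question to the model under the fixed persona."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("How should I pick a probable future for my project?"))
```

The builder UI adds extras (file uploads, tools, sharing), but the system prompt above is the part that makes a GPT feel “custom.”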
What are the most effective strategies for ensuring that an AI system can accurately understand and execute commands in a business setting, without misinterpreting certain phrases or questions?
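One common pattern for exactly this problem is to score how confident the system is in its interpretation and ask a clarifying question when the score is low, rather than guessing. A toy sketch with an invented intent list and a naive keyword score (a real system would use a trained classifier or embeddings):

```python
# Naive intent matching with a clarification fallback: if no intent scores
# above the threshold, the assistant asks instead of guessing. The intents
# and threshold here are invented for illustration.
INTENTS = {
    "refund": {"refund", "money", "return", "charge"},
    "shipping": {"ship", "delivery", "track", "arrive"},
    "cancel": {"cancel", "stop", "unsubscribe"},
}
THRESHOLD = 0.3  # fraction of a query's words that must match an intent

def classify(query: str) -> str:
    words = set(query.lower().split())
    # Score each intent by keyword overlap with the query.
    scores = {
        name: len(words & keywords) / len(words)
        for name, keywords in INTENTS.items()
    }
    best = max(scores, key=scores.get)
    if scores[best] < THRESHOLD:
        return "Sorry, I'm not sure what you need. Could you rephrase?"
    return f"Routing you to the {best} workflow."

print(classify("I want to track my delivery"))    # clear: shipping
print(classify("the thing broke and I'm upset"))  # unclear: asks to rephrase
```

The design point is the fallback: a system that admits uncertainty and asks is usually safer in a business setting than one that always picks its best guess.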
What can AI really do to this world? Meaning, in the future, is it possible that AI could genuinely take over the world in a way that makes humans completely worthless?
What are the main challenges in training generative models on large-scale datasets?
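To make one of those challenges concrete: at scale, a full batch rarely fits in GPU memory, so training loops lean on tricks like gradient accumulation and mixed precision. Here is a minimal sketch in PyTorch, with a toy model and random data standing in for a real generative model and dataset.

```python
# Two standard tricks for large-scale training: gradient accumulation
# (simulate a big batch with several small ones) and mixed precision
# (do most math in float16 to save memory). Toy model and data throughout.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

ACCUM_STEPS = 4  # effective batch = micro-batch size * ACCUM_STEPS

for step in range(8):
    x = torch.randn(16, 512, device=device)  # toy micro-batch
    with torch.autocast(device_type=device, enabled=(device == "cuda")):
        loss = nn.functional.mse_loss(model(x), x) / ACCUM_STEPS
    scaler.scale(loss).backward()  # gradients accumulate across micro-batches
    if (step + 1) % ACCUM_STEPS == 0:
        scaler.step(optimizer)     # one optimizer update per ACCUM_STEPS
        scaler.update()
        optimizer.zero_grad()
```

Other big challenges (data quality, cost, mode collapse, evaluation) don’t show up in a code snippet, but memory pressure is the one you hit first.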
With the increasing integration of AI in healthcare, we are seeing promising applications in areas like medical imaging, disease prediction, and personalized treatment planning. However, as AI becomes more deeply embedded in critical healthcare decisions, what safeguards and regulations do you think need to be in place?
Who will be responsible for major AI errors?
How can I use AI in future jobs? Can AI be trusted in the business world when it comes to trading, stocks, etc.?
In what ways do you personally think that AI could influence the way we live our everyday lives in the next 10 years?
I found this super relevant for our class: Walmart is selling AI-generated images.
I believe this will always be part of our society in some form, just presenting itself in different ways. The responsibility falls on the human who creates, prompts, and promotes the AI output that contains the error. There will always be accountability; it is up to humans to decide what counts as an error.
Where do we draw the line between jobs done by AI and jobs done by humans? At what point does a person’s share of the work become too small for them to be qualified for the job?
In the future, how much will (and should) AI be a part of the everyday person’s life?
Do you think AI should be used to predict and prevent crime, even if it means sacrificing some privacy? What are the potential risks and benefits of “predictive policing” AI systems?
As my group looks more into Neuralink and the implementation of AI and other technologies into humans, I wonder how it’s going to affect younger generations of kids and society as a whole.
I have been wondering a lot recently about the practicality of opinion-based service workers. Do you think people will still trust a human in the long run, even if a machine has more statistical data, just because of the argument of life experience vs. statistical data? I believe that most helping professions will eventually lose ground because they can be replicated by GPTs, and eventually even the emotional appeals can be learned by the machines. What do you think of this hypothesis?
I know you don’t use your final exam period, but will we have any sort of final presentation or anything “final”-related during the last week of class? Or will the last week just be normal class?
How will AI affect people’s everyday lives, and how will people find ways to abuse its power?