What Do We Know About the Edtech Services That Watch Students?


As technology advances at an unprecedented rate, Artificial Intelligence (AI) has become increasingly prevalent in many areas of our lives. From self-driving cars to virtual assistants, AI seems to be making our lives easier and more convenient. However, as with any new technology, there are potential drawbacks and concerns that need to be addressed. Recently, EdSurge had the opportunity to speak with a group of passionate teenagers who are lobbying against the dark sides of AI. That conversation raised an important question: what do we know about the companies that schools use to monitor students?

With the rise of AI in educational settings, schools have started to use various tools and programs to track and monitor student behavior, performance, and even emotions. These companies claim to provide valuable insights and help educators better understand and support their students. However, with little transparency and regulation, there are valid concerns about the privacy and ethical implications of using these AI-powered systems.

The teenagers EdSurge spoke with were particularly concerned about the potential negative impact of AI on marginalized and vulnerable students. They fear that these systems could lead to discrimination, perpetuate biases, and even violate students’ rights. In some cases, AI algorithms have been shown to make biased decisions based on race, gender, and socio-economic status, which could have serious consequences for students’ academic and personal growth.

One of the main issues highlighted by the teens was the lack of transparency and accountability among these AI companies. Many do not disclose their data collection and processing practices, making it difficult for schools and parents to understand how student information is being used. This opacity also makes it challenging to identify and address any potential biases in the algorithms.

Another concern raised by the teens was the potential for these AI systems to have a negative impact on the learning environment. The constant monitoring and surveillance can create a high-stress environment for students, leading to anxiety and decreased performance. Moreover, some of these systems claim to be able to detect emotions, which the teens argue is an invasion of students’ privacy and could lead to misinterpretation and misjudgment.

So, what do we know about the companies that schools use to monitor students? The truth is, not much. Many of these companies operate in secrecy, with little oversight or regulation. This lack of transparency and accountability is a significant cause for concern, especially when dealing with sensitive information about students.

However, it’s not all negative. Some companies are taking steps toward transparency and accountability. For example, IBM’s AI education tool, Watson Education, has been designed with privacy and transparency in mind. The company has a publicly available privacy policy that clearly outlines how it collects, uses, and protects student data. It also has a team of experts who actively monitor and address any potential biases in its algorithms.

Other companies, such as Google, have also made efforts to be more transparent about their data collection practices. For example, Google launched a dashboard that allows users to see and control the data collected through their Google account.

Additionally, some educators and policymakers are starting to take notice of these concerns and push for more accountability and regulation of AI in schools. In 2019, the California State Assembly passed a bill that would ban the use of facial recognition technology in schools until proper regulations and guidelines are in place.

In conclusion, while there are valid concerns about the use of AI in schools, it’s essential to recognize that not all companies are the same. Some are taking steps toward transparency and accountability, and it’s crucial for schools and parents to do their research and carefully select the tools and programs they use. It’s equally important for educators and policymakers to create regulations and guidelines that protect students’ privacy and ensure the ethical use of AI in educational settings. As technology continues to advance, it’s up to all of us to ensure that AI is used to benefit students, not harm them.
