Cognixion AI and NeuroEthics Policy
Cognixion is a company that develops and delivers innovative solutions for human communication, cognition, and interaction using artificial intelligence (AI), augmented reality (AR), neurotechnology, and sensors. We believe that these technologies have the potential to empower people, enhance their lives, and contribute to the social good. However, we also recognize that these technologies pose ethical and social challenges that require careful consideration and responsible action. Therefore, we have established this policy to guide our use of AI within our company and to ensure that we uphold our values and principles.
Our AI policy is based on the following key pillars:
- Cognixion Ethics & Human Agency Mandate: We commit to respect the dignity, autonomy, and rights of all human beings, and to design and deploy our AI solutions in ways that promote human agency, well-being, and inclusion. We will not use or support the use of AI for any purpose that violates human rights, harms human dignity, or undermines human agency. We will adhere to the highest ethical standards and comply with all applicable laws and regulations when developing and deploying our AI solutions.
- Cognixion Data Privacy & Stewardship: We commit to protect the privacy, security, and integrity of the data that we collect, store, process, and share as part of our AI solutions. We will only collect, use, and disclose data for legitimate and lawful purposes, and with the consent and knowledge of the data subjects or their authorized representatives. We will implement appropriate safeguards and measures to prevent unauthorized access, misuse, loss, or disclosure of data. We will respect the rights and preferences of the data subjects regarding their data, and provide them with transparent and easy-to-use mechanisms to exercise their rights.
- Cognixion Bad Actors & Prohibited Use Cases: We acknowledge that our AI technologies can be misused or abused by bad actors or for malicious purposes, such as deception, manipulation, coercion, discrimination, exploitation, violence, or terrorism. We will monitor and evaluate the impacts and outcomes of our AI solutions, and take proactive steps to prevent, detect, and mitigate any harmful or unethical use cases. We will cooperate with relevant authorities and stakeholders to report and address any incidents or concerns involving our AI technologies. We will not knowingly collaborate or partner with any individuals or organizations that intend to use our AI technologies for harmful or unethical purposes.
We expect all our employees, contractors, partners, and customers to abide by this policy and to share our commitment to ethical and responsible use of AI. We will provide regular training and education for our staff and stakeholders on the ethical and social implications of AI and our policy. We will review and update our policy periodically to reflect the evolving nature of AI and the feedback from our stakeholders. We will communicate and disclose our policy publicly and seek to engage in constructive dialogue and collaboration with other actors in the AI ecosystem.
Cognixion Ethics & Human Agency Mandate
We commit to respect the dignity, autonomy, and rights of all human beings, and to design and deploy our AI solutions in ways that promote human agency, well-being, and inclusion. We will not use or support the use of AI for any purpose that violates human rights, harms human dignity, or undermines human agency. We will adhere to the highest ethical standards and comply with all applicable laws and regulations when developing and deploying our AI solutions.
We believe that human agency is a fundamental human right and a core value of our company. Human agency is the ability of individuals to act independently and make their own choices, based on their own values, preferences, and goals. Human agency is essential for human flourishing, creativity, and empowerment. We recognize that AI technologies can either enhance or diminish human agency, depending on how they are designed and used.
We strive to create and deliver AI solutions that enable and augment human agency, rather than replace or constrain it. We aim to empower our users and customers with AI tools that enhance their communication, cognition, and interaction capabilities, and that respect their diversity, individuality, and autonomy. We seek to foster trust, transparency, and accountability in our AI solutions, and to provide our users and customers with meaningful control, feedback, and choice over their AI experiences. We endeavor to avoid or minimize any negative or unintended impacts of our AI solutions on human agency, such as bias, manipulation, coercion, addiction, or isolation.
We acknowledge that ensuring human agency in AI is a complex and dynamic challenge that requires continuous learning, reflection, and collaboration. We will engage with our users, customers, partners, and other stakeholders to understand their needs, expectations, and concerns regarding human agency in AI, and to incorporate their feedback and insights into our AI design and development processes. We will monitor and evaluate the performance and outcomes of our AI solutions, and ensure that we have mechanisms to address any issues or problems that may arise. We will participate in and contribute to the broader AI community and society, and support initiatives and efforts that promote human agency in AI.
Cognixion Data Privacy & Stewardship
We recognize that data is a valuable and sensitive asset that deserves our utmost care and respect. We also acknowledge that some of the data that we handle may contain personally identifiable information (PII), protected health information (PHI), or other types of information subject to specific regulations and standards, such as the Health Insurance Portability and Accountability Act (HIPAA) in the US and the General Data Protection Regulation (GDPR) in the European Union. We are committed to being ethical stewards of the data that we collect, store, process, and share as part of our AI solutions, and to protecting the privacy and rights of our users, customers, partners, and any other data subjects.
Cognixion Bad Actors & Prohibited Use Cases
Examples of bad actors and use cases that violate this policy include:
- Using our AI technologies to create or disseminate false, misleading, or harmful information, such as deepfakes, fake news, or propaganda.
- Using our AI technologies to impersonate, deceive, or defraud any person or entity, such as identity theft, phishing, or spoofing.
- Using our AI technologies to monitor, track, or spy on any person or group without their consent or authorization, such as surveillance, stalking, or harassment.
- Using our AI technologies to manipulate, influence, or coerce any person or group to act against their will or best interests, such as blackmail, extortion, or brainwashing.
- Using our AI technologies to discriminate, exclude, or harm any person or group based on their identity, characteristics, or beliefs, such as racism, sexism, or hate speech.
- Using our AI technologies to exploit, abuse, or harm any person or group physically, mentally, or emotionally, such as violence, torture, or trafficking.
- Using our AI technologies to endanger or threaten the security, stability, or sovereignty of any country, region, or the world, such as cyberattacks, warfare, or terrorism.
We will not tolerate or condone any of these use cases, and we will take the following actions if we discover or suspect any violation of our policy:
- We will immediately suspend or terminate the violator's access to or license for our AI technologies and services, and revoke any credentials and secure any data associated with them.
- We will notify and cooperate with the relevant authorities and stakeholders, such as law enforcement, regulators, or civil society organizations, to report and address the violation and its consequences.
- We will conduct a thorough investigation and analysis of the violation and its root causes, and implement appropriate measures to prevent or mitigate similar incidents in the future.
- We will disclose and communicate the violation and our response to the public and our stakeholders, and seek to restore trust and confidence in our AI technologies and our company.