Updated January 1, 2023
AI should be used ethically. engage can perform impressive tasks, but it should not be allowed to operate without oversight. We believe that individuals and nonprofit organizations that use AI should adhere to internal and external guidelines on its use.
- Our AI will not be used to analyze vulnerabilities for targeted advertising.
- Our AI will not be used to create deceptive or manipulative ads.
- We will take steps to reduce biases in AI training.
AI should be sustainable. In addition to improving productivity and revenue for nonprofit organizations, AI should also enhance the quality of life for people, including staff, donors, and the broader community of stakeholders. As AI becomes more capable, organizations have a responsibility to ensure that its use does not negatively impact people.
- If our AI automates part of someone’s job, that person should benefit.
- Our AI should empower the people interacting with it.
- The content produced by our AI should be supervised and edited as necessary.
AI should be democratic and community-oriented. We will be transparent about the capabilities and limitations of our platform, and we will make it clear to our users that the output of the AI should be reviewed and edited as needed before being published.
- We will be responsive to concerns raised by our users or members of the public about the potential impact of our platform on society and individuals.
- We will regularly review and update our ethics policy to ensure that it reflects the latest thinking on the responsible use of generative AI.
- This policy is an early attempt to set good ground rules in an unregulated artificial intelligence industry, and in the absence of established norms for its use in the nonprofit sector. We will update the policy from time to time as the technology and its use cases evolve.
Biases and content acknowledgment
As with any technology, generative AI can be subject to bias and can produce content that is offensive or inappropriate, including output that reinforces or exacerbates societal biases. Users of such technology are responsible for being aware of this potential and for taking steps to mitigate it, such as carefully reviewing and filtering the output of the AI and ensuring that the training data used to develop the AI is diverse and representative. The use of generative AI also has social and ethical implications, and it is the responsibility of users to consider these implications and to act in a responsible and ethical manner.
Contact Us
If you have any questions about this policy, our data policy, or anything else, you can contact us.