Last month, we started previewing DALL·E 2 to a limited number of trusted users to learn about the technology's capabilities and limitations.
Since then, we've been working with our users to actively incorporate the lessons we learn. As of today:
- Our users have collectively created over 3 million images with DALL·E.
- We've enhanced our safety system, improving the text filters and tuning the automated detection & response system for content policy violations.
- Less than 0.05% of downloaded or publicly shared images were flagged as potentially violating our content policy. About 30% of those flagged images were confirmed by human reviewers to be policy violations, leading to an account deactivation.
- As we work to understand and address the biases that DALL·E has inherited from its training data, we've asked early users not to share photorealistic generations that include faces and to flag problematic generations. We believe this has been effective in limiting potential harm, and we plan to continue the practice in the current phase.
Learning from real-world use is an important part of our commitment to develop and deploy AI responsibly, so we're starting to widen access to users who joined our waitlist, slowly but steadily.
We intend to onboard up to 1,000 people every week as we iterate on our safety system and require all users to abide by our content policy. We hope to increase the rate at which we onboard new users as we learn more and gain confidence in our safety system. We're inspired by what our users have created with DALL·E so far, and excited to see what new users will create.
In the meantime, you can get a preview of these creations on our Instagram account: @openaidalle.