Ask AI to draft the AI regulation. Is it hard?

As with the Covid-19 pandemic, my approach is to observe closely, do some research and fact-checking, engage in constructive discussions, consider both the good and bad outcomes that could come from this artificial intelligence (AI) “ChatGPT phenomenon”, and then make my move.

Advancement in technology should be considered an indicator of knowledge growth; naturally, it is a positive evolution for a generation or group of people. The use of ChatGPT, as I have observed, does help educate users to think about how to ask the right question. This is one of the few positive elements that ChatGPT and AI bring.

But looking at how things are happening now, and possibly in the near future, raises some concerns for me and a few others. It is concerning how the public uses ChatGPT and other AI-based tools. These concerns span various aspects, such as security, privacy, human knowledge growth, economics, and health. In my view, we desperately need rules and policies on how ChatGPT and similar AI-driven tools can be used.

According to recent news reports:

ChatGPT is now banned in Italy.

The country’s data protection authorities said the AI service would be blocked and investigated over privacy concerns.

The system, it said, does not have a proper legal basis for collecting personal information about the people using it. That data is collected to help train the algorithm that powers ChatGPT’s answers.

It is just the latest censure of ChatGPT and the artificial intelligence systems underpinning it, made by their creator, OpenAI. Italy’s decision came days after a range of experts called for a halt to the development of new systems, amid fears that the rush to create new AI tools could be dangerous.

In a recent incident, ChatGPT was reportedly linked to an alleged leak of confidential information at Samsung: on three separate occasions, employees entered sensitive data into the AI model.

These two events are just the starting point; we expect more to happen soon.
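Incidents like the Samsung leak suggest one practical safeguard organisations can adopt while waiting for formal regulation: screen prompts for obvious personal data before they ever leave the company for an external AI service. Below is a minimal, hypothetical sketch in Python; the `redact` helper and its patterns are illustrative assumptions, not any official tool or vendor API.

```python
import re

# Illustrative patterns only -- a real deployment would need far broader
# coverage (names, IDs, source code, trade secrets) and human review.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace each matched pattern with a [REDACTED:<kind>] tag
    before the prompt is sent to an external AI service."""
    for kind, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{kind}]", prompt)
    return prompt

print(redact("Contact john.doe@example.com or +82 10-1234-5678"))
# -> Contact [REDACTED:email] or [REDACTED:phone]
```

A filter like this does not make sending data to a third-party service safe, but it illustrates the kind of usage boundary that a workplace AI policy could mandate.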

A few people have started taking advantage of this ChatGPT trend by offering workshops, classes, or webinars promoting the use of ChatGPT. Some are quite good, while others are average in their knowledge, understanding, or experience with the technology. It is indeed a reasonable effort to promote new technologies that could benefit others, but ONLY IF the presenter addresses both the capabilities of ChatGPT and the risks users should be aware of. The situation gets ugly when the audience or students blindly follow along and leave themselves exposed to various digital risks, such as being hacked, infected with malware, or suffering a privacy data breach (I plan to write another article about this).

I started hands-on use of ChatGPT via a Telegram bot. Below is an example of a question I asked, followed by the bot’s verbatim answer:

Ask AI to draft the AI regulation. Is it hard?

Well, we’ve all heard about the potential of AI to revolutionize our lives, but it’s important to remember that this technology comes with its risks. To ensure that these risks are minimized, it’s essential to have clear regulations and guidelines in place when using AI technology. Here are 20 regulations for using AI technology that should be kept in mind:

1. AI must have clear objectives and be used specifically.

2. AI must be designed with human values in mind and not be used to manipulate or harm people.

3. AI must be accurately tested and evaluated before implementation.

4. AI must be transparent and open so that users can understand how it works.

5. AI must not be used to replace or reduce human labour.

6. AI must be designed with privacy in mind and not be used to invade or exploit personal data.

7. AI must be designed to be secure and not be vulnerable to external attacks.

8. AI must be designed to be resilient and not be prone to disruption.

9. AI must be designed to be accountable and have transparent processes for handling errors and mistakes.

10. AI must be designed to be fair and not be biased against any particular group or individual.

11. AI must be designed to be responsible and consider ethical and moral considerations.

12. AI must be designed to adhere to applicable laws and regulations.

13. AI must not be used to interfere with elections or other democratic processes.

14. AI must not be used to discriminate against any person or group.

15. AI must not be used to infringe on any individual’s civil rights.

16. AI must not be used to spread misinformation or propaganda.

17. AI must not be used to target vulnerable people or those without the capacity to make informed decisions.

18. AI must be designed to be updated and improved over time.

19. AI must be designed to be able to be shut down in the event of an emergency or when no longer needed.

20. AI must be designed to comply with all applicable national and international standards.

By adhering to these regulations, we can ensure that AI technology is used responsibly and ethically and that it is used in a way that maximizes its potential to benefit society.

Ask AI to draft the AI regulation. Is it hard? I am aware there are already efforts to design policies for the use of ChatGPT and AI tools, especially at universities: real human brains and experience from various fields of expertise setting up the policy (not asking ChatGPT to write the policy for us and blindly following it). This is a good effort, and we look forward to a national-level policy on AI very soon. It is not about restricting AI; it must also include educating users and setting boundaries for AI usage.

We celebrate AI advancement and, as the saying goes, with great power comes great responsibility. That is why we need to think about and consider proper regulation of AI, for a safe and productive human future, before we move further (or it is too late).

Other Related Readings:

[1] Elon Musk joins call for pause in creation of giant AI ‘digital minds’
[2] Germany considers following Italy in banning ChatGPT
[3] We tested a new ChatGPT-detector for teachers. It flagged an innocent student.
[4] A top AI researcher reportedly left Google for OpenAI after sharing concerns the company was training Bard on ChatGPT data
[5] Publishers demand to be paid for AI using their work
[6] OpenAI: Sorry, ChatGPT Bug Leaked Payment Info to Other Users
[7] Bill Gates says the ‘age of A.I. has begun’ and it could either reduce inequity or make it even worse
[8] No tech in the classroom: Professor considers going ‘back to basics’ as ChatGPT gains popularity
[9] Does your company need a policy for AI like ChatGPT?
