Ekata Deb
5 min read · Nov 24, 2023

How to reduce the AI Detectability Index of your NLP/LLM-generated content


Here, I shall give you a brief idea of how to create authentic, original content that reads as more human, even though you have likely used AI-driven software to write it.

I understand that creating 100% human content in the age of technology and machine learning is almost impossible. We can now hardly get through a day without GPTs, especially those of us for whom

Content is king.

When the entire world has embraced ChatGPT (GPT-2, 3, 3.5, 4), Bard, LaMDA, and many similar Large Language Models (LLMs) and Natural Language Processing (NLP) models, we are relentlessly becoming machine-dependent.

Now, whether due to the pressure of academic publication in UGC-CARE listed, Web of Science, or Scopus-indexed journals, the aggressive publication competition we push ourselves into, or an honest shortage of time, we tend to fall back on the most easily accessible academic resources of our day, which are nothing but AI-driven software.

Commonly used AI writing tools, in my opinion, are QuillBot, Scribd, Grammarly, ChatGPT 3.5 and 4, Quetext, Duplichecker, OpenAI's DALL·E, LaMDA, Bard, and so on. Apart from the web content already available on our chosen topic or area of work, we now go to these LLM/NLP models and get our content ready. If relevant information turns out to be thin, we search across the internet, find apt web content, copy it, and begin our academic operations: we paraphrase, summarise, rewrite, and proofread our content within a couple of minutes to a few hours using AI-driven software like QuillBot and Grammarly, slotting in more apt information as we go.

By this time, we almost have 90% of the content ready at hand. We then go to referencing websites like Scribbr, paste in all the web links, and get our bibliography ready in our choice of referencing style: APA 7, APA 6, MLA, Chicago, etc.
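The APA 7 pattern for a web source, for instance, is essentially "Author, A. (Year). Title of page. Site Name. URL", which is simple enough to template yourself. The helper below is my own hypothetical sketch, not Scribbr's tool:

```python
# Hypothetical helper: format a web source in APA 7 style.
# Pattern: Author, A. (Year). Title of page. Site Name. URL
def apa7_web_reference(author: str, year: int, title: str,
                       site: str, url: str) -> str:
    return f"{author} ({year}). {title}. {site}. {url}"

print(apa7_web_reference("Deb, E.", 2023,
                         "How to reduce the AI Detectability Index",
                         "Medium", "https://medium.com/..."))
```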

The last part is to run the entire content through software like Turnitin or Originality.ai to pass the academic integrity test and reassure ourselves of a plagiarism index below 5%. But what do we see, and then see again, this time with raised eyebrows?


Our entire content is playing Holi (the Indian festival of colours that arrives with spring) in orange, red, brown, violet, and green.

So, what actually happened? What did we find? Why is our content filled with colours? Maybe it is not Holika Dahan, but it is definitely AI-dahan. (Holika is a mythological character, and Holika Dahan is the part of the Holi festival in which an effigy of the demon Holika is symbolically burnt, "dahan" meaning the ending of all evils.)

Why so? Maybe our plagiarism index came in below 5%, maybe the content even showed 0% plagiarised, yet we scored almost 99% on the AI Detectability Index.
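What is that index actually measuring? Turnitin and Originality.ai do not publish their algorithms, so treat this as an illustration only: one widely discussed proxy for machine-written text is perplexity, i.e. how predictable the passage is to a language model. A minimal sketch using the open GPT-2 model (assuming the Hugging Face transformers and torch packages are installed):

```python
# Rough proxy for "AI-likeness", NOT any vendor's actual algorithm:
# score how predictable a passage is to GPT-2. Very low perplexity
# suggests machine-like, highly predictable prose.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean
        # negative log-likelihood per token; exponentiate it.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

print(perplexity("The impact of technology on education is significant."))
print(perplexity("Kolkata drizzled; my deadlines multiplied like rumours."))
```

Human writing, with its idiosyncratic phrasing, tends to score a higher perplexity, which is roughly why the manual rewriting described below lowers the index.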

What does this mean? All the content we prepared and all our effort over the past few hours is null and void; we created nothing original.

So, does that mean we shouldn't use AI software today? Should we stop using academic writing tools and start writing everything manually?

The answer is a big NO. It is always

Smart work over hard work that matters most.

So, what should we do now, keeping the ramifications of using AI in mind?

We can actually have the entire research manuscript drafted, or at least a top-level anatomy of our intended research sketched out, by GPT-3.5/4 or any other good LLM/NLP model. No offence in that.

Maybe we can even carry out our first-level paraphrasing and proofreading with AI-driven software such as QuillBot or Grammarly.

Get the entire AI-driven content printed (a hard copy is advisable, though print-to-PDF also works) and then start paraphrasing it manually. I have detailed this process on this website: http://www.ekvisquotes.com

Summarily speaking, we need to work across the text in every direction: break up the sequence of sentences, find similar words, and replace them with synonyms, as in the sketch below.
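To make the mechanical idea concrete, here is a deliberately naive sketch of the synonym-replacement step using NLTK's WordNet (my illustration, not a recommended tool). Blind substitution mangles sense and register, which is exactly why I advise doing this pass by hand:

```python
# Naive synonym substitution, for illustration only. WordNet has no
# sense of context or register, which is why manual paraphrasing
# beats anything this mechanical.
import random
import nltk
from nltk.corpus import wordnet

nltk.download("wordnet", quiet=True)

def naive_synonym(word: str) -> str:
    """Return a random WordNet synonym for `word`, or the word itself."""
    candidates = {
        lemma.name().replace("_", " ")
        for syn in wordnet.synsets(word)
        for lemma in syn.lemmas()
        if lemma.name().lower() != word.lower()
    }
    return random.choice(sorted(candidates)) if candidates else word

sentence = "The study examines the impact of technology on academic writing"
print(" ".join(naive_synonym(w) for w in sentence.split()))
```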

Then we need to type the entire content into MS Word without a running internet connection, or at least without any browser open in which an AI LLM/NLP tool is running. We need to proofread manually, and rewrite too, if needed.

Understand that you should never copy-paste and then paraphrase or rewrite content taken directly from websites. Open-source articles and web content from the internet, for example, should no longer be used as references.

The only preferred sources for academic references are written documents available in hard copy or as PDFs (authentic, high-quality journals).

Last but not least: GPT may be growing exponentially at present and may be blamed for gobbling up so many jobs in the market, yet human intelligence remains far ahead of artificial intelligence. Rather than banning such AI tools completely, one can work responsibly with ChatGPT. Legislatures, governing authorities, and regulatory bodies may now need to rethink policy when framing rules for academic publishing with AI-driven software, for example by allowing a certain level of AI Detectability Index in submitted content.

For more details on the suggested policy changes relating to publications using AI-driven software, do visit https://www.fiatlexica.com.

What I believe is that high-paying jobs will now be created and opened up for more human editors, proofreaders, and copy-editors for referencing, styling, and formatting.

Do you think you can prove your content to be more human? Can you beat the AI Detectability and Plagiarism Index? Join our 3-day live course to learn:

"Writing more humanely using AI-driven software"

Available from December 2023 at @Ekvis Quotes and @Fiat Lexica.


Written by Ekata Deb

A firm believer in Utilitarianism, Existentialism, and Realism, with an oxymoronic spirit of Idealism and Nihilism. Practises Stoicism and Vipassana.
