After The Washington Post's report was published, Google repeatedly stressed that AI cannot possess self-awareness, in an effort to dispel public concern. Several scholars in the AI research field also spoke out against Blake's claims. Among them was Margaret Mitchell, former head of Google's AI ethics team, who was herself dismissed after publicly criticizing the lack of diversity among Google's staff. In a Twitter post, Margaret said that LaMDA has not developed self-awareness; rather, it has merely built a model of the different textual cues people use when expressing opinions. Another AI scientist, Gary Marcus, went further, calling Blake's claims "nonsense on stilts."
Finally, in response to our main site's inquiry about Blake's dismissal, Google said it had investigated the claims he made about LaMDA but found nothing to support them, yet Blake persisted in expressing his views in ways that violated employment and data security policies:
As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation. LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development. If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly. So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well.