This article appeared in Knife Magazine in January 2026.
Know Your Knife Laws – AI and Knife Laws
By Anthony Sculimbrene, Attorney and Knife Expert
“Google: Are automatic knives legal in Hawaii?”
“Yes, automatic knives are legal in Hawaii, but with restrictions. A law change in May 2024 removed the ban on switchblades and gravity knives, making them legal to own and openly carry. However, concealed carry of these knives is still prohibited.”
This is the answer you get from Google’s large language model AI called Gemini. Is it right? Well, like Obi-Wan told Luke: “That’s true, from a certain point of view.” In reality, Hawaii’s Second Amendment jurisprudence is basically unintelligible, as the Hawaii Supreme Court has refused to adopt Bruen. So, yes, a law was passed that eliminated the ban on automatic knives, but that law’s constitutionality is unclear, given the refusal to adopt Bruen. When you add in the Hawaii legislature’s willingness to change a law to bar a federal suit, AND the fact that there is no consensus on the legal definition of “concealed,” the Gemini answer is best seen as truthy but not exactly true.
So, are AI answers about knife laws useful? Let’s try another one with an unquestionably clear answer.
“Google: Are automatic knives legal in Massachusetts?”
“Yes, automatic knives are legal to carry in Massachusetts following an August 27, 2024, ruling by the Massachusetts Supreme Judicial Court that struck down the state’s ban on switchblade knives as unconstitutional. The court found that the state’s nearly 70-year-old prohibition on switchblade knives violated the Second Amendment.”
BING! Gold star Gemini. This is correct.
All of this highlights one of the current limitations of large language model AIs (LLMs). They are excellent at scanning large volumes of information and producing answers to discrete questions, so long as those discrete questions have discrete answers. In the case of Massachusetts, Commonwealth v. Canjura was a case with a very simple holding: after Bruen, the state ban on automatic knives is unconstitutional. In Hawaii, where the law change is filtered through two cases, Wilson and Teter, the discrete question does not have a discrete answer, so Gemini’s answer is not just vague; it is both misleading and inaccurate.
This is due to how LLMs work and how the modern AI architecture, the neural network, solves the problems posed by questions. For an excellent overview, listen to the episode of the podcast Persuasion titled “Geoffrey Hinton on Artificial Intelligence.” In it, Hinton, who solved two major problems in computing that enabled the development of our current AI models (including Gemini), explains in plain terms how AI and neural networks work. In essence, for language models, neural networks are trained to predict the next word in a sentence based on the context of that sentence and the larger document. LLMs make millions or billions of highly refined guesses about the next word (and therefore the next sentence) based on both what came before, in this case a question, and how words are used in relation to one another across large samples of language (hence, the large language model). When these guesses produce answers that fit the patterns in the AI’s training data, it deems them correct and spits them out to the user.
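For readers who want to see the idea in miniature, here is a deliberately tiny sketch of “predict the next word from what came before.” This is not Gemini’s architecture (real LLMs use neural networks over billions of parameters, not word counts); the corpus and function names here are invented for illustration only.

```python
from collections import Counter, defaultdict

# Toy illustration, NOT a real LLM: a bigram model that "predicts" the
# next word by counting which word most often followed each word in a
# small, made-up training text.
corpus = (
    "automatic knives are legal in massachusetts "
    "automatic knives are banned in some states "
    "automatic knives are legal to carry"
).split()

# Count, for each word, how often every other word follows it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` seen in training."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("are"))  # "legal" follows "are" twice, "banned" once
```

The model answers confidently whenever the training text contains a clear pattern, and it has no concept of whether the pattern is legally accurate; that is the limitation the article describes, just at a vastly smaller scale.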
LLMs are predictive. They don’t “know” what the correct answer is (which raises the question of whether humans do, but that is another issue), but they can guess really, really well. And if the answer is something discrete, like a famous quote, they can find it very fast and produce it with a very high degree of fidelity. Legal holdings, especially straightforward ones like the one from Canjura, are essentially quotes, and LLMs can chew through those problems very fast. But synthesizing cases, statutes, and constitutional provisions, and weighing them against one another, is something LLMs currently struggle to do. It is especially vexing when, as right now, the Hawaii Supreme Court seems to be in open revolt against the Second Amendment.
This isn’t to say that AI couldn’t do these things. Adjusting the weights between connections is one of the things Hinton worked out to improve AI, so having weights “tuned” specifically for legal analysis doesn’t seem far off. But for now, Gemini’s answers are like those you get from very small children: literally true, but not all that useful outside the most basic cases.
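The “tunable weights” idea can also be shown in miniature. The sketch below (an invented single-weight example, nothing like a production system) nudges one weight toward whatever value reduces prediction error, which is the same error-correction principle behind the training methods Hinton helped develop.

```python
# Toy illustration of a "tunable" weight, NOT a real legal-analysis model:
# repeatedly nudge a single weight so the model's predictions move toward
# the targets, shrinking the squared error a little on each pass.
def train_weight(weight, examples, lr=0.1, steps=100):
    for _ in range(steps):
        for x, target in examples:
            prediction = weight * x
            error = prediction - target
            weight -= lr * error * x  # gradient step for squared error
    return weight

# The examples encode the rule "output = 2 * input"; training should
# pull the weight from 0.0 to approximately 2.0.
learned = train_weight(0.0, [(1.0, 2.0), (2.0, 4.0)])
```

Scaled up from one weight to billions, and with the error signal coming from legal texts rather than a toy rule, this is roughly what “tuning an AI for legal analysis” would mean.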
For more accurate legal advice, it’s best to go to a lawyer, especially when your money and liberty are at stake. If you have to use internet sources, at least use those that were developed by human lawyers who know the law in that given area. AKTI’s information is all curated by human lawyers who know the law, either Dan Lawson or me.