In every field of specialized legal practice, attorneys proudly tout their industry expertise. Oil and gas lawyers highlight their knowledge of mineral rights, the rule of capture, and the byzantine regulations governing extraction operations. Entertainment attorneys understand the nuances of residuals, distribution windows, and talent agreements. Healthcare lawyers navigate HIPAA, the Stark Law, and CMS reimbursement structures with ease. In these and countless other practice areas, industry knowledge isn’t considered optional—it’s the baseline expectation for competent representation.
Yet when it comes to technology law, something strange happens. Too many attorneys treat technical knowledge as unnecessary, even irrelevant. They approach software, artificial intelligence, and digital systems with a curious detachment, as if the underlying technology somehow doesn’t matter to the legal analysis. This attitude would be laughable in any other context. Imagine an oil and gas attorney who dismissed the importance of understanding how hydraulic fracturing works, or a maritime lawyer who considered knowledge of shipping operations to be beyond their purview.
The consequences of this technological illiteracy in the legal profession are far-reaching and often invisible to those outside the industry. Bad laws get drafted. Contracts fail to account for how technology actually functions. Litigation strategies miss critical technical facts that could change outcomes. And perhaps most troublingly, the attorneys perpetuating these failures rarely understand what went wrong.
This is why I built my practice around bridging law and technology. After more than fifteen years in software development — including seven years working with AI systems — I went to law school specifically because I saw how badly the legal profession needed attorneys who could actually understand the technology they were advising on. Understanding technology isn’t a nice-to-have; it’s fundamental to effective legal representation in any matter touching digital systems, software, data, or artificial intelligence. And increasingly, that means nearly every matter.
Understanding technology isn’t optional — it’s the baseline for competent legal counsel.
The TRAIGA Problem: A Case Study in Tech-Blind Drafting
Nothing illustrates the dangers of lawyers not understanding technology better than recent legislative efforts to regulate artificial intelligence. Consider the Texas Responsible AI Governance Act (TRAIGA), which Governor Abbott signed into law in June 2025 and takes effect January 1, 2026.
TRAIGA defines “artificial intelligence system” as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”
Read that definition carefully. If you have a technology background — if you’ve ever written software, trained a machine learning model, or built a data pipeline — that definition should make you deeply uncomfortable. Because what it actually describes isn’t artificial intelligence in any meaningful sense. What it describes is virtually any software.
Let me break it down from a developer’s perspective. A machine-based system that infers from inputs how to generate outputs? That’s the fundamental description of computing. Every program takes inputs and generates outputs. The word “infers” might sound like it limits the definition to AI-specific processes, but inference in computing terms simply means deriving conclusions or results from data. A basic calculator “infers” the sum when you provide two numbers. A spell-checker “infers” corrections based on dictionary lookups. A thermostat “infers” whether to activate heating or cooling based on temperature readings.
The definition continues: outputs “including content, decisions, predictions, or recommendations.” This covers everything from autocomplete suggestions to database query results to Excel formulas. When your spreadsheet recommends a chart type based on your data, it’s generating a recommendation from inputs. When your word processor decides where to hyphenate a word, it’s making a decision based on rules and inputs.
And finally: outputs “that can influence physical or virtual environments.” This encompasses any software that does anything at all. A website that displays different content based on user preferences influences a virtual environment. An industrial control system influences a physical environment. Even a basic customer relationship management (CRM) system influences virtual environments by organizing and presenting data.
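To make the overbreadth concrete, here is a deliberately trivial sketch — a hypothetical illustration, not anything from the statute or its legislative history. It is the rule-based thermostat mentioned above, with no machine learning anywhere, annotated against each prong of TRAIGA’s definition as literally written:

```python
# A deliberately trivial "machine-based system" -- no machine learning
# anywhere. Read against TRAIGA's definition literally, it checks every box.

def thermostat(readings: list[float], setpoint: float = 21.0) -> str:
    """Decide whether to heat, cool, or idle from temperature inputs."""
    # Prong 1: it "infers from the inputs the system receives how to
    # generate outputs" -- it derives a result from sensor data.
    avg = sum(readings) / len(readings)

    # Prong 2: the output is a "decision," one of the definition's
    # enumerated output types (content, decisions, predictions,
    # recommendations).
    if avg < setpoint - 0.5:
        decision = "HEAT"
    elif avg > setpoint + 0.5:
        decision = "COOL"
    else:
        decision = "IDLE"

    # Prong 3: the decision "can influence physical or virtual
    # environments" -- in a real deployment it would drive an HVAC relay.
    return decision

print(thermostat([19.2, 19.5, 19.1]))  # HEAT
print(thermostat([23.4, 23.8, 23.6]))  # COOL
```

Fifteen lines of if/else logic, and every element of the statutory definition is satisfied. That is the drafting problem in miniature.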
The drafters of this language almost certainly didn’t intend to regulate all software. They were trying to capture the novel capabilities of modern AI systems — large language models, autonomous decision-making systems, predictive algorithms that can perpetuate bias. Their intent is clear from the context of the legislation and the broader regulatory conversation around AI governance.
But courts interpret statutes through their text, and unexpressed intent rarely rescues imprecise drafting. What matters is the words on the page. And the text of TRAIGA sweeps in far more than anyone likely intended because the people who drafted it didn’t understand technology well enough to write precise definitions.
Why This Matters for Your Business
If you’re a technology company operating in Texas — or selling products to Texas residents, or deploying software systems in the state — TRAIGA creates genuine compliance uncertainty. The statute prohibits using AI systems for certain purposes, including manipulating human behavior or discriminating against protected classes. These are reasonable policy goals. But because the definition of “AI system” is so broad, businesses now face questions about whether these prohibitions apply to their conventional software.
- Does your recommendation engine qualify as an AI system? Under a literal reading of TRAIGA, quite possibly yes.
- Does your automated customer service chatbot? Definitely.
- Does your dynamic pricing algorithm? Almost certainly.
- Does your basic inventory management software that predicts when to reorder supplies? Arguably so.
TRAIGA does provide some safe harbors, including substantial compliance with the NIST AI Risk Management Framework. But navigating these provisions requires understanding both the legal framework and the technical architecture of your systems. An attorney who doesn’t understand how your software works cannot effectively advise you on whether it falls within TRAIGA’s scope or qualifies for available defenses.
A lawyer who understands technology would have spotted these drafting issues. They could have advocated for more precise definitions during the legislative process. They could help you structure compliance programs that address actual regulatory concerns rather than treating all software as equally risky. They could identify which of your systems genuinely raise the policy concerns TRAIGA targets and which are swept in only by definitional overreach.
I challenge you to find another attorney who will point out these drafting issues in TRAIGA and explain why they matter. Most lawyers can’t do it because they don’t have the technical foundation to recognize the problems. They’ll tell you the law says what it says and help you comply, without ever questioning whether the law captures what it purports to regulate or creates perverse incentives through overbroad definitions.
The CFAA: Decades of Tech-Blind Consequences
TRAIGA isn’t an isolated example. The history of technology law is littered with statutes that created unintended consequences because their drafters didn’t understand the technology they were regulating.
The Computer Fraud and Abuse Act (CFAA), originally enacted in 1986, provides perhaps the most notorious example. The law prohibits intentionally accessing a computer “without authorization” or “exceeding authorized access.” The problem? Congress never defined what “without authorization” actually means.
This ambiguity has led to decades of interpretive confusion and prosecutorial overreach. Courts have struggled with basic questions: Does violating a website’s terms of service constitute “unauthorized access”? Can an employee who accesses workplace systems for improper purposes be prosecuted under the CFAA even if they were technically authorized to access those systems?
The consequences have ranged from tragic to absurd. Aaron Swartz, the brilliant programmer and internet activist, faced up to 35 years in prison and $1 million in fines under the CFAA for systematically downloading academic articles from JSTOR — articles he was authorized to access. The Department of Justice threw the book at him for what amounted to a terms of service violation, treating it as a federal crime. Swartz took his own life in 2013 while facing prosecution.
In the Lori Drew case, prosecutors charged a woman under the CFAA for creating a fake MySpace profile — essentially arguing that using a false identity violated MySpace’s terms of service and therefore constituted unauthorized computer access. The conviction was eventually overturned, but not before the case demonstrated how the statute could criminalize routine internet behavior.
The CFAA’s vague language has been used by employers to threaten employees who access workplace systems for personal use, by companies to attack competitors engaged in legitimate web scraping, and by prosecutors to pile on charges in cases that have little to do with actual hacking. Tim Wu, the Columbia Law professor who coined the term “net neutrality,” has called the CFAA “the worst law in technology.”
The Supreme Court finally provided some limiting interpretation in Van Buren v. United States (2021), holding that “exceeding authorized access” requires accessing areas of a computer system that are off-limits, not merely misusing information from areas you’re permitted to access. But it took 35 years of confusion, inconsistent circuit court interpretations, and genuine human tragedy before the nation’s highest court could begin reining in a statute that was poorly drafted from the start.
All of this could have been avoided if the lawyers drafting the CFAA had understood how computer systems actually work, how authorization is implemented technically versus contractually, and what kinds of conduct the law should genuinely criminalize.
The Double Standard in Legal Practice
Here’s what baffles me about the legal profession’s attitude toward technology: in every other specialized field, lawyers are expected to understand their clients’ industries. Nobody thinks twice about this expectation. It’s simply the baseline for competent practice.
Oil and gas attorneys understand drilling operations, production sharing agreements, and the technical aspects of reservoir engineering. They know the difference between working interests and royalty interests, between wildcat wells and development wells, between conventional and unconventional extraction techniques. Texas even offers board certification in Oil, Gas, and Mineral Law, recognizing that specialized technical knowledge distinguishes competent practitioners in this field.
Healthcare attorneys understand clinical operations, reimbursement mechanics, and the regulatory frameworks governing medical practice. They can discuss diagnosis-related groups, meaningful use requirements, and the technical standards for electronic health records with fluency.
Maritime lawyers understand vessel operations, charter party agreements, and the international conventions governing shipping. They know port state control, flag state jurisdiction, and the technical certificates required for vessel operations.
In each of these fields, industry expertise is a selling point. Law firms proudly advertise their attorneys’ technical backgrounds, their industry experience, their ability to speak the client’s language and understand their business.
But technology law often operates differently. Too many attorneys approach software, AI, and digital systems as black boxes that need not be understood to be regulated. They draft contracts governing software development without understanding software development methodologies. They litigate patent disputes involving algorithms without understanding how those algorithms work. They negotiate data processing agreements without understanding database architecture or data flows.
And then they wonder why technology companies find their legal counsel unhelpful, why technology contracts so often fail to address the actual risks, why technology legislation so frequently creates unintended consequences.
What Tech-Savvy Legal Representation Looks Like
Understanding technology isn’t about being able to code, although that certainly helps. It’s about having sufficient technical literacy to ask the right questions, spot potential issues, and translate between technical and legal concepts.
When I review an AI system, I don’t just ask whether it “makes decisions” in some abstract sense. I examine the model architecture, the training data, the inference pipeline, and the deployment context. I distinguish between rule-based systems and machine learning models, between narrow AI applications and more general-purpose systems. I understand that a simple classification model trained on historical data poses different risks than a large language model capable of generating novel content.
When I draft software development agreements, I understand the difference between waterfall and agile methodologies, between monolithic and microservices architectures, between on-premises deployment and cloud-native applications. This technical understanding informs every provision, from milestone definitions to acceptance criteria to warranty terms.
When I advise on data privacy compliance, I understand how data actually flows through modern systems — from collection through processing to storage and eventual deletion. I know that “deleting” data from a production database is meaningless if copies exist in backups, logs, caches, and analytics systems. I can help clients implement genuine data minimization rather than theatrical compliance.
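The deletion problem is easy to show in a few lines. The following is a hypothetical sketch — the names (`users`, `cache`, `audit_log`, `backups`) are illustrative, not any client’s architecture — of what many systems actually mean by “deleting” a record:

```python
# Hypothetical sketch of "deletion theater": the primary record is
# removed, but personal data survives in the cache, the append-only
# audit log, and the nightly backup snapshot.

users = {"u42": {"email": "jane@example.com"}}        # primary store
cache = {"u42": {"email": "jane@example.com"}}        # read-through cache
audit_log = [("signup", "u42", "jane@example.com")]   # append-only log
backups = [dict(users)]                               # nightly snapshot

def naive_delete(user_id: str) -> None:
    """What many systems call 'deletion': one DELETE on the primary DB."""
    users.pop(user_id, None)

naive_delete("u42")

# The record is gone from the primary store...
assert "u42" not in users

# ...but the personal data still exists in three other places.
surviving = (
    "u42" in cache
    or any(uid == "u42" for _, uid, _ in audit_log)
    or any("u42" in snapshot for snapshot in backups)
)
print(surviving)  # True
```

Genuine deletion has to reach every copy: purging or expiring the cache entry, tombstoning or crypto-shredding the log records, and aging the data out of backups on a defined schedule. Advising on whether a retention policy is actually satisfiable requires knowing these copies exist.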
When I analyze cybersecurity incidents, I understand attack vectors, forensic artifacts, and the technical factors that determine whether a breach was preventable. I can evaluate whether security measures were reasonable in context, not just whether they checked the right compliance boxes.
This technical literacy matters because the law doesn’t exist in a vacuum. Legal rules interact with technical realities, and understanding those interactions is essential for effective counseling.
The Policy Consequences of Tech-Illiterate Law
The problems created by lawyers who don’t understand technology extend beyond individual client matters. They affect the entire legal and regulatory framework governing the technology sector.
When lawyers draft overly broad statutes like TRAIGA, they create compliance burdens that fall disproportionately on smaller companies without the resources to navigate regulatory ambiguity. Large technology companies can afford to hire teams of lawyers and lobbyists to influence legislative drafting and argue for favorable interpretations. Small startups and independent developers cannot. The result is a regulatory environment that favors incumbents over innovators.
When lawyers bring poorly conceived technology litigation, they create bad precedent that distorts the law for years. Cases get decided based on misleading technical analogies, incorrect factual premises, and judges who lack the technical background to evaluate the arguments presented to them. These decisions then constrain future courts and influence future legislation.
When lawyers negotiate technology contracts without understanding the underlying technology, they create agreements that either fail to protect their clients from genuine risks or impose impractical requirements that hamper legitimate business operations. Either way, the legal instrument fails to serve its purpose.
The technology sector has grown too important — to the economy, to society, to individual daily life — for the legal profession to continue treating technical literacy as optional. The lawyers who draft AI regulations need to understand AI. The lawyers who litigate software disputes need to understand software. The lawyers who advise technology companies need to understand technology.
My Approach: Bridging Law and Technology
I describe myself as an attorney and software developer — not because it’s a catchy tagline, but because it accurately reflects how I practice law. My technical background isn’t separate from my legal work; it’s integral to it.
I bring over fifteen years of software development experience and seven years of AI development work to every engagement. I’ve built the systems I now help clients regulate, protect, and commercialize. I understand technology not as an outside observer but as a practitioner who has lived the engineering challenges, the architectural tradeoffs, and the operational realities.
This technical background means I can spot issues that other lawyers miss. When I review TRAIGA’s definition of AI systems, I immediately recognize its overbreadth because I’ve built the systems it inadvertently captures. When I draft cybersecurity policies, I understand which controls actually reduce risk versus which merely satisfy compliance checklists. When I advise on data governance, I understand the technical architecture that determines whether data practices are actually achievable.
Understanding technology isn’t just about spotting problems — it’s about finding solutions. When regulations are ambiguous, technical knowledge helps me identify compliant architectures that achieve business objectives. When contracts need to address novel risks, technical understanding enables me to draft provisions that actually work in practice. When disputes arise, technical fluency lets me identify the facts that matter and present them persuasively.
My clients tell me that having a lawyer who actually understands their codebase and technical constraints is a game-changer. I don’t just identify legal issues — I work with development teams to find practical solutions that protect businesses while allowing them to innovate.
Technology Is Eating the Law
Every year, more legal matters involve technology. Data privacy regulations apply to virtually every business. Cybersecurity requirements continue to expand. AI governance is emerging as a new regulatory field. Software is embedded in products ranging from automobiles to medical devices to home appliances. Digital transformation has made technology central to operations across every industry.
This means that technical literacy is no longer relevant only for lawyers who specialize in technology law. Corporate lawyers need to understand data flows to draft competent privacy provisions. Employment lawyers need to understand algorithmic hiring tools to advise on discrimination risks. Intellectual property lawyers need to understand software architecture to evaluate patent claims. Litigation attorneys need to understand electronic evidence to conduct discovery effectively.
The days when lawyers could treat technology as someone else’s problem are over. The technology is everywhere, and lawyers who don’t understand it are increasingly unable to serve their clients effectively.
Choosing the Right Counsel
If you’re evaluating legal counsel for technology matters, here are the questions you should ask:
- Does the attorney have technical education or professional experience with the technologies involved in your matter? Academic credentials in law are necessary but not sufficient for technology practice. Look for attorneys who can demonstrate substantive technical knowledge, whether through formal education, professional experience, or demonstrated thought leadership.
- Can the attorney explain technical concepts clearly, without hand-waving or reliance on buzzwords? Anyone can say “artificial intelligence” or “blockchain” or “cloud computing.” Competent technology attorneys can explain what these terms actually mean, how the underlying technologies work, and why specific technical details matter to legal analysis.
- Has the attorney identified issues that other counsel missed? Technical literacy reveals problems that technically illiterate lawyers overlook. Ask for examples of how the attorney’s technical background changed outcomes or influenced strategy.
- Does the attorney engage with the technology community? Attorneys who understand technology typically participate in technical discussions, speak at industry conferences, and stay current with technological developments. This ongoing engagement ensures their technical knowledge remains relevant as technologies evolve.
- Can the attorney translate between technical and legal frameworks? The most valuable technology attorneys serve as bridges between technical and legal teams. They can explain legal requirements to engineers in terms engineers understand and explain technical constraints to executives in business terms.
Looking Forward
The legal profession’s attitude toward technology is slowly changing. Bar associations increasingly recognize technology competence as part of an attorney’s ethical obligations. Law schools are expanding technology offerings. More attorneys are seeking technical education alongside their legal training.
But the change is happening too slowly. The technology sector continues to outpace the legal profession’s ability to understand and regulate it effectively. Bad laws continue to be drafted by well-intentioned lawyers who lack the technical knowledge to foresee consequences. Inadequate contracts continue to fail clients whose attorneys couldn’t understand the technical risks at issue.
I’m not waiting for the legal profession to catch up. I’m delivering technology-literate legal representation today, for clients who understand that their business deserves an attorney who understands their technology.
Bring Technical Literacy Into Your Legal Strategy
Whether you’re navigating AI governance requirements like TRAIGA, structuring software development agreements, responding to cybersecurity incidents, or facing any other technology-related legal challenge, you need counsel who can engage with the technical substance of your matter.
Technology is too important to leave to lawyers who don’t understand it. If you’re ready to work with someone who bridges the gap between law and code, let’s talk.
Alex Shahrestani is an attorney and software developer who helps technology companies navigate complex legal challenges. He is the managing partner of Promise Legal PLLC, a technology-forward law firm in Austin, Texas. To discuss how his unique expertise can help your business, schedule a consultation.