We've "been there, done that" on over $50 billion worth of technology.




Security Risk

Cybersecurity is one of the most value-impairing risks. Adverse security events can shrink the client base, incur regulatory penalties, invite class-action lawsuits, and cause long-term brand damage.
Cybersecurity Company Has Bad Security
Target was in the cybersecurity space, so client assumed target’s own security would be good. Instead we found known, actively exploited vulnerabilities and unpatched, end-of-life network sniffers deep in customers’ server rooms, among other security failures. Client and target altered the roadmap to prioritize security fixes and reallocated liability for breaches based on the vulnerable code and best-practice failures Tech DNA identified.
SOC 2 Certification Misleading
Target was in the ad-tech space as both a demand-side platform (DSP) and a supply-side platform (SSP). Target provided a recent SOC 2 Type 2 certification as evidence of being secure. But SOC 2 certifications are a poor indicator of actual cybersecurity, and we found multiple attack vectors, including: insufficient protection against SQL-injection attacks; access-control bypass; hard-coded passwords; monitoring and alerting gaps; and spoof-able audit mechanisms. Moreover, we identified no recent pen testing, which SOC auditors sometimes consider only a recommendation rather than a requirement under SOC 2. Target agreed to a pre-close pen test and to fix all issues found by Tech DNA and the pen test.
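Insufficient protection against SQL injection almost always reduces to queries built by string concatenation. As a minimal sketch (hypothetical table and function names, not target's actual code), the vulnerable form and the parameterized fix look like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL text,
    # so a payload like "' OR '1'='1" rewrites the query itself.
    return conn.execute(
        "SELECT * FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(name):
    # Parameterized: the driver treats `name` strictly as data, never as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(len(find_user_unsafe(payload)))  # 1 -- every row leaks via injection
print(len(find_user_safe(payload)))    # 0 -- no user is literally named that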
Attack Vector Against Famous People
Target was in the movie production management space and had numerous high-profile directors and producers as clients in Hollywood, Bollywood, and elsewhere. Target had multiple cybersecurity problems, notably old but still-in-market desktop versions with no protection against infected files or fuzzing attacks. The deal proceeded at a lower valuation to accommodate lost revenue from client’s new requirement that target give the latest (and secure) versions away to mitigate the risk from unfixable older desktop versions.


Privacy Risk

Privacy penalties are now big enough (up to 4% of acquirer’s global revenue) that they can dwarf deal value. We assess both general privacy regulations like the GDPR and CCPA, as well as sectoral regulations like HIPAA, SOX, and COPPA.
In the EU But Didn't Know It: Penalties Still Apply
Target built travel agency software for the US market. Or mostly for the US market. We found code in an older product serving a handful of agencies in Ireland that fed EU personal data back to the US. That feed exposed target to GDPR penalties, which are calculated against all revenue, even non-EU revenue. The high cost of GDPR compliance tends to force an in-or-out decision: either generate enough EU revenue to justify the compliance costs, or exit the EU. Client chose to exit the EU and focus on the North American market.
Business Model Blocked by GDPR
Target built “salary” software for two groups: employers wanting to know what to pay, and employees wanting to know their market value. While both had privacy issues, only one affected the deal: the employee-facing software used “screen scraping” technology to find potential job seekers and sell them “what’s your market worth?” services. Such screen scraping violates the GDPR, which target counsel ultimately conceded. Target agreed to end screen scraping in the EU, client excluded the related revenue from their models, and the deal proceeded as modified.
Med-tech Company Didn't Understand HIPAA
Target was in the medical diagnostic space. Their secret sauce was machine learning insights related to skin disease and wound management. We found target’s claimed HIPAA compliance was misplaced. First, their HIPAA certification did the wrong analysis, which is common: it analyzed target as a “covered entity” (such as a hospital) rather than as a “business associate” (typically a SaaS business), which is what target was, yielding false compliance comfort. Second, the certifiers and target both failed to distinguish between the “Safe Harbor” approach to de-identification (removal of enumerated identifiers such as names and medical record numbers) and the “Expert Determination” approach. The first applied to names and other text identifiers found in metadata and on name tags in images, while the second applied to the identifying body features of the images themselves. Client required target to secure a legal opinion from HIPAA-experienced counsel, which client then mapped against revenue to quantify how much revenue remained if limited to only HIPAA-compliant activities. (In addition to the HIPAA privacy issue identified here, we also found separate violations of HIPAA’s Security Rule.)
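The Safe Harbor approach is mechanical: remove or generalize an enumerated list of identifier categories (names, medical record numbers, most dates, and so on). A minimal sketch of text/metadata de-identification, with hypothetical field names and a deliberately simplified record layout:

```python
# Hypothetical field names; HIPAA Safe Harbor enumerates identifier
# categories (names, medical record numbers, contact details, etc.)
# that must be removed, and dates must be coarsened.
SAFE_HARBOR_FIELDS = {"name", "mrn", "phone", "email", "street_address"}

def deidentify(record: dict) -> dict:
    """Drop enumerated identifier fields and keep only the year of dates."""
    clean = {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}
    if "visit_date" in clean:
        clean["visit_date"] = clean["visit_date"][:4]  # "2021-03-14" -> "2021"
    return clean

record = {"name": "Jane Doe", "mrn": "12345", "visit_date": "2021-03-14",
          "diagnosis": "wound, stage II"}
print(deidentify(record))  # {'visit_date': '2021', 'diagnosis': 'wound, stage II'}
```

No amount of field stripping can remove identifying body features baked into the image pixels themselves, which is precisely why the Expert Determination approach was required for the images in this case.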

    “One of the biggest benefits of working with Tech DNA is their ability to determine what the real focus should be. They just know how technology affects a business and exactly what to look for.  They understand the ‘So What?’ of technology.”


    “Tech DNA consultants simply have a more substantial technical and M&A experience than the competition. They’re just really clear and concise about a technology company’s strengths and weaknesses.”


    “Tech DNA is very technically deep, but they do it in a way that doesn’t over compensate for people that are not as technically deep. They bridge that gap between what engineers say and executives need to hear.”



Scale Risk

Scale risk is the inability to meet customer demand. Or the increased errors, customer frustration, downtime, or security breaches that occur while trying. We evaluate the scalability of both technology and teams.
Double Whammy: Not Just One Scale Limiter. Two of Them!
Target built software to manage oil and gas extraction. Like most B2B targets, the target had three scale axes: number of customers; number of end users per customer; and total data and/or processing volume for those end users/customers. Target claimed its tech could scale to 10x new customers in 3 years. But we found issues on two axes that undermined that claim. First, we found onboarding code customizations that required, on average, 6 months per new customer. Second, we found that while target was technically “in the cloud”, target wasn’t “in the cloud” in a way that would scale 10x. Client adjusted models to accommodate higher onboarding headcount OpEx and extended the time to achieve cloud’s scale benefits.

OSS License Risk

Open source software (OSS) license risk falls into two camps: a) target’s improper use of open source software prevents target from passing clean title of their software to acquirer, or b) target’s distribution of OSS compels target to give away its valuable IP for free.
(OSS can also present security risks. See Security Risk case studies.)
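Diligence on the second camp usually starts with an inventory of copyleft dependencies. A minimal sketch (not Tech DNA's actual tooling, and limited to Python packages installed in the current environment) that flags declared licenses mentioning a GPL-family copyleft:

```python
from importlib import metadata

def flag_copyleft():
    """Return (package, license) pairs whose declared license looks copyleft."""
    hits = []
    for dist in metadata.distributions():
        name = dist.metadata.get("Name") or "unknown"
        lic = (dist.metadata.get("License") or "").upper()
        if "GPL" in lic:  # matches GPL, LGPL, and AGPL variants
            hits.append((name, lic))
    return sorted(hits)

for name, lic in flag_copyleft():
    print(f"{name}: {lic}")
```

A real audit goes much further (transitive dependencies, vendored source, license texts rather than self-declared metadata), but even this crude pass surfaces candidates for the "viral" analysis in the case studies below.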
Franchise Model = Give It All Away ... For Free!
Target built hotel management software. Client had both company-owned and franchised hotels. We found open source software license terms that penalized distribution of the software to franchised locations; the penalty was disclosure of critical source code IP to anyone, including competitors. To dodge disclosure, we recommended client reorganize its rollout plans to delay distribution to franchise locations until target could purge all such “viral GPL” software, which client did.
No Get Out of Jail Free Card: You Can't Contract Around OSS Licenses
Target was in the cybersecurity space and monitored customer networks with target-provided hardware that ran at customers’ locations. Target knew its secret-sauce algorithms were commingled with “viral” open source packet-sniffing software on those target-provided machines. But target didn’t think the IP-destroying licenses applied because target never “sold” the software to customers, and the software was never put on customer-owned machines. The software/hardware combo was merely licensed as an appliance. So it wasn’t a “distribution”, they convinced themselves. But that’s not the way open source licenses work, and target’s two arguments didn’t save them: First, distributing hardware along with software doesn’t somehow alter the software’s license terms and conditions. Second, structuring the business relationship between target and its customers as cyber services rather than software sales also didn’t save them. With target’s ownership of their critical IP in question, and along with other security issues we found, client walked away.


ML/AI Risk

Machine learning / artificial intelligence (ML/AI) risk isn’t often a risk per se. More accurately, the issue is overinflated valuations based on dubious ML/AI claims. The exception is regulated industries, such as health care and credit scoring, where incorrect algorithms can cause bodily or financial harm, which in turn can result in significant reputational damage, regulatory interest, or liability.

Note: We are often asked to do an ML/AI “sniff test”. With so much ML/AI puffery out there, clients often want a gut check of ML/AI claims up front before incurring any additional deal costs. #1 and #2 are sniff-test case studies.

Not Really ML and Not Really That Good
Target was in the litigation e-discovery space, parsing SEC and other rigid-form documents. We were asked for an initial “is it real?” machine learning / artificial intelligence (ML/AI) assessment. We found two serious problems: a) some of what they were calling “machine learning” was actually heuristic models and sophisticated rules engines; these can sometimes be applied to problem sets similar to machine learning’s, but are not machine learning as claimed, and b) the ML/AI they did have was typical of, say, an undergraduate student and riddled with bias and overfitting errors the team didn’t realize were there. Client pulled the plug and walked away.
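Overfitting of the kind described is easy to surface with a held-out set: a model that memorizes its training data shows near-zero training error but a much worse validation error. A minimal sketch with toy data, using a 1-nearest-neighbor regressor as the memorizing model:

```python
import random

random.seed(0)

# Toy regression: y = x + noise. A 1-nearest-neighbor model memorizes the
# training set, so its training error is exactly zero, while the held-out
# error stays at the noise floor: the classic overfitting signature.
train = [(i / 10, i / 10 + random.gauss(0, 0.1)) for i in range(10)]
val = [(i / 10 + 0.05, i / 10 + 0.05 + random.gauss(0, 0.1)) for i in range(10)]

def predict(x):
    # 1-NN: return the label of the closest training point.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def mse(data):
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

print(f"train MSE: {mse(train):.4f}")      # exactly 0.0000 (memorized)
print(f"validation MSE: {mse(val):.4f}")   # nonzero: the noise floor shows
```

A team that only ever reports training-set metrics, as here, never sees that gap.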
Basic But Solid: Right Tools, Right Models
Target was in the robotic process automation (RPA) space and was using ML/AI to watch employee tasks and then intelligently mimic them. We found the ML/AI to be real but average for, say, someone with 2-3 years’ professional experience. Target used the right tools, understood which data features mattered most, and built privacy into training sets from the start. But target was using simplistic models and lacked accuracy feedback mechanisms. Client proceeded to the next stage of due diligence (and ultimately concluded the deal after further due diligence).
Don't Be Fooled By Lots of PhDs
Target was in the end-point security space, using artificial intelligence to block intrusions by finding anomalous device behavior. Target had recently signed a long-term services contract with an “ML company” staffed by a dozen PhDs who claimed sophisticated ML expertise. But the claims were untrue: despite the high number of PhDs in math and statistics, not one had more than a few months of self-taught experience in ML/AI. Client proceeded with the deal, with target terminating the “ML company” contract and agreeing to build new and real ML/AI capabilities in-house within 12 months.


Commoditization Risk

Commoditization risk is the risk that the value and competitiveness of target’s tech have been eroded by newer, cheaper technologies.
Trying to Compete With Free
Target, in the retail space, had built a state-of-the-art big data pipeline in 2014 to analyze the shopping behavior of its clients’ customers. Unbeknownst to client, some tech areas have largely commoditized in recent years, and sure enough we found target’s pipeline had been rendered non-competitive by Amazon’s and Google’s vastly superior data pipeline tech. But the low-value tech was hidden behind product licensing requirements that bundled the no-longer-relevant pipeline with still-market-leading retail management software. Client adjusted valuation to correctly treat target’s pipeline tech as maintenance drag rather than competitive benefit.


Outsourcing Risk

Outsourcing comes with several risks: lack of control of or ownership over source code, contested IP ownership, impaired ability to retain key talent, and general conflicts of interest between what’s best for target vs. what’s best for the dev shop.
Turns out, they didn't write or control much of their own code
Target made accounting software. In reviewing the source code, we noticed roughly 40% of the code was missing. It turned out that 40% was not only written by external developers in the UK, but target didn’t even have access to the source. By contract target was entitled to it, so we eventually got the source code. Client adjusted the deal to recognize the outsourced IP and changed the roadmap to remove the external dependency within 24 months. At client’s request, we identified which of the outsourcer’s employees would be most valuable to hire directly.
Can't Pass Clean Title: Dev Shop Claims Code Is Theirs
Target was in aeronautics space and was upfront that development was partially outsourced. Outsourced dev shop resisted handing over code paid for and owned by target because dev shop said some of that code was now dev shop code. And this is common: dev shops naturally specialize over time by taking code written for one client and using it to provide similar custom work to new clients at below-starting-from-scratch prices. After a while, dev shops start to see code used by more than one client as their own secret sauce, even if target had originally paid for it. Which is exactly what happened here. Client then asked us to detail the conflicting IP claims between target and dev shop and we agreed that some code was dev shop code requiring a license. Client adjusted transaction to accommodate new licensing payments until target could implement their own version.

Obsolescence Risk

Obsolescence risk is the risk that target’s code is at the end of its lifecycle and that significant near-term costs loom just to avoid falling behind.
Code So Old It's Time To Start Over (at great expense)
Target was in the telematics / fleet tracking space. Target’s code was first developed in the 2000s and had kept pace with modern competitors (albeit at higher cost), but cracks were beginning to show. Our task was two-fold: first, give a realistic assessment of target’s ability to not just “migrate” to the cloud but re-architect in the cloud to actually capture cloud’s benefits and ROI; and second, assess whether the current code could remain competitive (or at least have a tolerable cost profile) until the re-architecture-to-cloud was complete.