As chief growth officer at technology services contractor Dark Wolf Solutions, Oliver Sadorra channels his engineering background and technical expertise into a role directing the company’s business development, capture and proposal operations. He’s just as concerned with guiding the organization along a path of consistent market growth as he is with breaking new ground in research and development. The latter has included work in artificial intelligence, software automation, cybersecurity and more.
Sadorra has been with Dark Wolf for nearly a decade and before that, spent a decade with SRA, a CSRA company now owned by General Dynamics Information Technology. He knows the government contracting landscape well and is passionate about industry events and engagement.
He delved into some of these passions while also outlining the targeted growth trajectory of both the company and AI in a recent Spotlight interview with ExecutiveBiz.
ExecutiveBiz: What makes Dark Wolf a place where the best and brightest want to work?
Oliver Sadorra: As a technical services and solutions firm within GovCon, we operate in an extremely competitive market for top talent. I think many in industry will agree that GovCon firms are not only competing for talent with each other but now find themselves competing with some of the top commercial tech firms as well as an abundance of tech startups, given the proliferation of remote work.
That said, we have an amazing workforce at Dark Wolf, with some of the most skilled professionals that I’ve had the honor to work with. Our employees are not only thought leaders in industry — leading exhibits, workshops and panels at premier events such as Black Hat and XPONENTIAL — but also adept practitioners, earning top-place finishes at prestigious Capture the Flag and hackathon competitions, including DEF CON. Our ability to attract and retain the best and brightest talent ultimately comes down to our corporate culture, which differentiates us from our peers in industry. Since our beginnings, we have cultivated this culture to reflect our five core values:
- Customer first – We execute under a customer-first ethos, which means that, above all, we focus on doing what is in the best interest of our customers. Everything, including our short-term financials, comes after customer delivery.
- Outcomes matter – Our customers entrust us to address some of the most complex technical challenges that they face, often tied to national security priorities. As such, we deliver solutions and services that focus on our customers’ most pressing needs.
- Pack life – We foster an environment that enables collaboration and working together as a pack, not as lone wolves. This includes a heavy focus on coaching and mentorship, where every employee, including the highest levels of leadership, is assigned a career coach from day one of employment.
- Diabolically creative – We reward creativity, celebrate the best ideas based on merit (not seniority), and accept failures as iterative steps to success. We also pair our technologists with subject matter experts with the end goal of forging forward-leaning solutions that truly address our customers’ hardest challenges.
- Embrace evolution – We take to heart the rapid evolution of technology and the need to prepare our workforce to adapt these new and emerging technologies to address customers’ needs. We leverage lunch and learns, internal hackathons, our certification bounty program and vendor partnerships to understand the latest technologies and trends.
We’ve found these values and our culture create an environment where the best and brightest thrive, and we celebrate that with promotion days, company-wide bell-ringing ceremonies for contract wins and extensions, and local events for our people during the summer and winter holiday seasons.
ExecutiveBiz: Where are you seeing opportunities for expansion in Dark Wolf’s portfolio? What new capabilities or markets are you eyeing?
Oliver Sadorra: When I look at Dark Wolf’s future growth, I see the need to adapt to not only the evolving technological ecosystem but also the dynamic geopolitical environment. From a technology standpoint, we’ll continue to refine and expand our capabilities within our three practice areas: cybersecurity, software and DevOps, and digital transformation.
We have, for instance, already seen recent expansion in our cybersecurity practice in areas such as zero trust implementation and penetration testing of AI systems. Additionally, our initially niche capability supporting cybersecurity assessment of commercial drones has grown substantially, and we fully expect to see that broadening to other unmanned as well as autonomous systems.
Within our software and DevOps practice, we’re seeing, as the next progression from DevSecOps, an increased focus on data, to include data engineering, DataOps and integration of artificial intelligence and other data analytic technologies.
Lastly, our digital transformation practice continues to build and refine our capabilities around supply chain risk management, technical due diligence and operational test and evaluation. As we continue to expand our capabilities, we also look to grow our portfolio with continued prioritization within the national security sector. We have already seen the shifts in national security priorities with recent global conflicts and activity, which has led to not only a shift in geographic focus areas but also increased attention to the space and cyber domains. As a firm, we have considerable capabilities to support these emerging priorities.
ExecutiveBiz: Generative AI has been the source of a major AI boom in the last year, but some of these tools are also proving to be risky. How do you think cybersecurity will have to evolve to stay ahead of potential threats posed by AI tools?
Oliver Sadorra: We should really look at assessing the risk of generative AI as well as other advanced AI capabilities from two perspectives: 1. the risks in adopting and using these capabilities within our environments and 2. the risks posed by those using these capabilities for nefarious purposes. From the adoption standpoint, we have already heard the stories about proprietary data unintentionally being released. When we perform cybersecurity assessments, whether as a red team or blue team or in support of the assessment and authorization, or A&A, process, one of the first things that we look at are the policies within an organization. Many organizations have been slow to put the proper policies in place during this AI boom. As a result, usage of these tools often goes unrestricted and unmonitored. Many organizations, including our federal partners, have since adopted policies to address these AI risks, and we recommend that those who haven’t do so.
Also, as we adopt these technologies, we should look at using the Risk Management Framework, or RMF, as a guide to understand, prioritize and address (or accept) risks. The National Institute of Standards and Technology has released AI RMF 1.0 (NIST AI 100-1) and the companion AI RMF Playbook to highlight the unique qualities of AI systems compared to more traditional IT systems. This guidance identifies areas such as bias and fairness, transparency and explainability, data privacy and security, performance and accuracy, and ethical considerations for evaluating the trustworthiness of AI systems and provides recommendations to support the AI RMF core functions: govern, map, measure and manage. In conjunction with traditional RMF controls and strategies, organizations can leverage this guidance to more effectively secure and protect these AI systems.
Looking at the nefarious use of these technologies, especially given their low barrier to entry, bad actors have been increasingly using generative AI technologies to support malicious activities. It has been reported that generative AI has been used for social engineering — more sophisticated phishing emails, fake identities, voice cloning and disinformation campaigns — as well as other activities such as malware generation and the rapid identification and exploitation of attack vectors. That said, many of these AI technologies can also support the response to these threats. Anomaly detection, behavioral analytics and similar AI-driven capabilities, for instance, can and should be used to support continuous monitoring and threat detection and response. AI can also help to support dynamic threat analysis, continuously evaluating collected threat intelligence so organizations can adapt more rapidly and effectively.
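The anomaly detection Sadorra mentions can be illustrated in miniature. The sketch below is hypothetical and not Dark Wolf tooling: a simple z-score test on event counts stands in for the far richer behavioral-analytics models a production monitoring pipeline would use.

```python
# Minimal sketch of anomaly detection for continuous monitoring.
# Assumption: a baseline of normal activity (e.g. daily login counts
# per account) is available; a real system would model many features.
from statistics import mean, stdev

def detect_anomalies(baseline, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the baseline mean."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [x for x in observed if abs(x - mu) > threshold * sigma]

# Normal activity hovers around 50 events per day; the 400-event
# spike is flagged for analyst review.
baseline = [48, 52, 50, 49, 51, 47, 53, 50]
alerts = detect_anomalies(baseline, [49, 51, 400])  # → [400]
```

In practice, flagged events would feed a triage queue alongside the threat-intelligence signals described above, rather than triggering automated action on their own.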
Finally, while cybersecurity training has become a must for organizations to require at least annually, training specific to these AI threats should be integrated as part of it. Educating staff on best practices for AI use, raising awareness of the associated risks and teaching them to recognize malicious uses of these advanced AI capabilities is now an imperative.
ExecutiveBiz: Where do you think AI is headed next?
Oliver Sadorra: This is definitely a tough question to answer, as I think there are many directions in which AI is headed. We’re, for instance, already seeing the rapid evolution of these generative AI capabilities, and with that, society is discovering new ways to adapt them. Because of the democratization that they have enabled, we’ll continue to see these technologies augment different fields and hopefully improve the quality of life across society.
Another area where we’ll likely see a significant evolution is autonomy. Autonomous systems will become more capable, especially as we see better integration and optimization with hardware. Apart from the technology itself, there will be an increased focus on ethical AI and the maturation of AI governance; we’re already seeing this. Governments across the globe are now recognizing the benefits and risks of AI and adapting policies accordingly.
The last thing that I want to highlight is the convergence of AI with other advanced technologies such as quantum computing. It’s natural for such technologies to converge and support each other to address increasingly complex problems.