Scaling AI with Storage Efficiency – with Leaders from Pure Storage, Generac, Lexmark, Comfort Systems USA, Danaher, Alcon, and More

This interview analysis is sponsored by Pure Storage and was written, edited, and published in alignment with our Emerj sponsored content guidelines. Learn more about our thought leadership and content creation services on our Emerj Media Services page.

Data fragmentation remains a critical challenge for organizations worldwide, significantly impeding usability and obstructing digital transformation initiatives. Research underscores that fragmented data systems not only complicate access and analysis but also significantly drain resources and innovation capacity.

A study from MIT highlights that data fragmentation threatens the effectiveness of data linking and analytics, thereby undermining the ability of organizations to generate comprehensive insights and actionable intelligence from disparate data sources.

A complementary UK public sector survey revealed that roughly 70% of organizations experience poorly coordinated or non-interoperable data, which limits their ability to maintain a comprehensive operational view and hinders their digital transformation efforts.

Emerj recently hosted a special series of the ‘AI in Business’ podcast with enterprise executives to explore how organizations are managing and scaling AI infrastructure, optimizing data storage, and enhancing storage efficiency to support advanced AI workloads.

Executives featured in the series include Shawn Rosemarin, Vice President of R&D in Customer Engineering at Pure Storage; Neil Bhandar, Chief Data Analytics Officer at Generac; Bryan Willett, Chief Information Security Officer at Lexmark; Amit Gupta, Chief Digital Officer at Danaher; Joe Lang, Vice President of Service Technology and Innovation at Comfort Systems; Norma Scagnoli, Chief Learning and Innovation Officer at the Illinois Institute of Technology; Greg Ratcliff, Chief Innovation Officer at Vertiv; and Julian Tang, Chief Operations Officer for the Innovation Office at BlackRock.

During these conversations with Emerj Editorial Director Matthew DeMello, the leaders dove deep into the challenges of AI adoption, including data governance, infrastructure scaling, workforce collaboration, and ethical deployment.

This article examines several key insights from their conversations for leaders aiming to scale AI effectively, optimize data storage, and strengthen governance:

  • Optimizing power usage to grow: Assessing and centralizing existing data while evaluating energy constraints to ensure AI initiatives remain feasible, efficient, and aligned with business value.
  • Balancing flexibility with discipline: Ensuring cloud scalability delivers real ROI by actively managing storage, cleaning up unused data, and making cost–elasticity tradeoffs before investing in AI infrastructure.
  • Evaluating and controlling data risk: Assessing your risk tolerance to determine whether sensitive data should remain on-device, on-premises, or in a hybrid setup.
  • Building a layered, decoupled data foundation: Structuring data architecture around aggregation, integration, transformation, and harnessing while using a decoupled design to enable seamless integration.
  • Aligning teams to scale systems: Bringing key stakeholders from academic affairs to finance, IT, and student services into the governance process to ensure systems, workflows, and data infrastructure can handle large-scale digital programs.
  • Sequestering trusted data: Keeping AI models trained solely on secure, internally generated data reduces misinformation risks, improves accuracy, and ensures more reliable outcomes.
  • Deploying modular AI infrastructure: Building standardized, modular, and scalable data center units to quickly deploy local AI capabilities while leveraging the same infrastructure as hyperscale systems.
  • Engaging stakeholders upfront: Bringing legal, compliance, and InfoSec into AI initiatives from the start prevents governance delays and ensures smoother implementation.

Optimizing Power Usage to Grow

Episode: Scaling AI with Storage Efficiency – with Shawn Rosemarin of Pure Storage

Guest: Shawn Rosemarin, Vice-President R&D in Customer Engineering at Pure Storage 

Expertise: Customer Engineering, Data Intelligence, Analytics

Brief Recognition: With over 25 years of industry experience, Shawn has held leadership positions at Hitachi Vantara, Dell, and IBM. In his current role at Pure Storage, he leads strategy efforts with engineering teams and customers.

Shawn explains that before jumping into AI, organizations need to take inventory of their existing data. Over the years, businesses have invested heavily in digitizing information. Still, much of it remains fragmented, context-poor, and difficult for machines to interpret; for example, doctors’ notes may make sense to humans but not to AI systems.

The first step, he says, is to understand what data exists, what’s usable, and where it lives, often spread across dozens or hundreds of systems. From there, companies must consolidate and centralize this data to improve accessibility and speed, ensuring it’s close to the systems that need it (a strategy he refers to as giving in to “data gravity”).

Finally, AI infrastructure decisions must be grounded in business practicality: the cost of managing and using data should never outweigh the value it delivers to end users.

He warns that energy constraints are becoming the biggest limiter of AI and data growth. He explains that enterprises, countries, and even individuals may soon face power quotas, as current energy consumption (especially from GPUs, which use 10 times more power than CPUs) risks compromising public well-being.

New data centers are already being restricted in some regions to prevent power shortages during heat waves. Organizations need to assess their “bridge to running out of power” (e.g., 12, 24, 36 months) and proactively plan whether to invest in alternative energy sources, such as nuclear, hydroelectric, or coal, or wait for innovations like nuclear fusion. Without this foresight, even highly valuable AI initiatives could stall due to insufficient energy.
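
Rosemarin’s “bridge to running out of power” can be framed as simple compound-growth arithmetic. The sketch below is illustrative only; the quota, current draw, and growth rate are assumptions for demonstration, not figures from the episode.

```python
import math

# Illustrative "bridge to running out of power" estimate.
# All figures used below are assumptions, not from the episode.

def months_until_power_cap(current_kw: float, cap_kw: float,
                           monthly_growth_rate: float) -> float:
    """Months until power draw reaches the allotted cap, assuming
    compound monthly growth in consumption."""
    if current_kw >= cap_kw:
        return 0.0
    return math.log(cap_kw / current_kw) / math.log(1 + monthly_growth_rate)

# Example: drawing 600 kW today against a 1 MW quota, growing 3% per month.
bridge = months_until_power_cap(current_kw=600, cap_kw=1000,
                                monthly_growth_rate=0.03)
print(f"Bridge to running out of power: {bridge:.0f} months")
```

A planning horizon shorter than the lead time for new capacity (or a modernization project) is the signal to act now rather than kick the rock down the road.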

“I’m confident that in the end, when we look at what’s happening, systems are getting more efficient. The challenge is that there’s a lot of legacy infrastructure being put in place today. There are a lot of inefficient systems. There’s a lot of legacy storage that has been deployed over decades, which is needlessly consuming power that could be better used elsewhere. I’ve been in the enterprise as well. I know that sometimes kicking the rock down the road is a better option than actually doing a wholesale modernization, but if you don’t clean up the foundation of your house, then your bridge to running out of power is going to be 12 months.”

– Shawn Rosemarin, Vice-President R&D in Customer Engineering at Pure Storage

Balancing Flexibility with Discipline

Episode: Why Immutable Snapshots Matter for Compliance and AI – with Neil Bhandar of Generac

Guest: Neil Bhandar, Chief Data Analytics Officer, Generac

Expertise: Artificial Intelligence, Machine Learning, Cloud Computing

Brief Recognition: In his current role at Generac, Neil leads the development of the company’s data strategy and oversees the buildout of analytics platforms and capabilities across the Generac franchise. Previously, he held roles at Procter & Gamble, JPMorgan Chase, Campbell’s, and Evanta (a Gartner company), among others. Neil holds a master’s degree in Industrial and Systems Engineering from Lehigh University.

Neil explains that many executives struggle with AI investment decisions because they lack hands-on experience with data, GPUs, and compute. He points out that AI infrastructure is in a deflationary cycle: what costs a certain amount today will be cheaper and more powerful in a few months, which creates hesitation around when to invest. He also challenges the common belief that more data always leads to better outcomes.

“There are certain substitutable data elements, which are protected classes of data that could be a proxy. And so, by that definition of being a proxy, they became sensitive data elements. One real example of this is if you look at people’s country of birth, it’s highly correlated to their country of undergraduate education. But your country of undergraduate education is not a protected class variable. Your country of birth is now contingent. So now you’ve got to be sensitive when you think about how you use certain data just because of that proxy association.”

– Neil Bhandar, Chief Data Analytics Officer at Generac

He also explains that while the idea of storing and processing data externally isn’t new (credit agencies have been doing it since the 1960s), the scale of today’s cloud use is far greater. People and businesses now store everything from financial records to personal photos in the cloud, largely because storage costs have fallen and connectivity has improved.

When deciding between cloud, on-premises, or hybrid setups, he urges leaders to evaluate two key factors: cost and elasticity. Cloud platforms offer scalability and convenience, allowing organizations to quickly expand capacity during mergers or spikes in data. However, that same flexibility can become a hidden cost if unused data continues to sit in storage.

Neil’s takeaway from the priority framework he presents is that cloud adoption isn’t just about flexibility; it requires discipline in managing and cleaning up data to ensure that scalability doesn’t quietly turn into wasteful spending.
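
As a rough illustration of that discipline, a periodic job could estimate how much of the storage bill is going to data nobody has touched. The price, dataset names, and staleness threshold below are all hypothetical assumptions, not details from the episode.

```python
# Hypothetical sketch of the cost–elasticity tradeoff Bhandar describes:
# flag how much of the monthly cloud bill goes to data nobody reads.
PRICE_PER_GB_MONTH = 0.023  # assumed object-storage rate, USD

datasets = [
    # (name, size_gb, months_since_last_access) -- illustrative inventory
    ("sales_history", 800, 1),
    ("marketing_raw_logs", 5_000, 14),
    ("merger_archive_2022", 12_000, 20),
]

STALE_AFTER_MONTHS = 12  # policy threshold: candidates for cleanup or tiering

stale = [(name, gb) for name, gb, idle in datasets if idle >= STALE_AFTER_MONTHS]
stale_gb = sum(gb for _, gb in stale)
wasted_per_year = stale_gb * PRICE_PER_GB_MONTH * 12

print(f"Stale data: {stale_gb:,} GB across {len(stale)} datasets")
print(f"Estimated annual spend on unused data: ${wasted_per_year:,.2f}")
```

Running a report like this before an infrastructure expansion makes the elasticity bill visible, so scaling up is a deliberate choice rather than a default.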

Evaluating and Controlling Data Risk

Episode: Building Storage Strategies That Scale with AI Workloads – with Bryan Willett of Lexmark

Guest: Bryan Willett, Chief Information Security Officer, Lexmark

Expertise: IT Security, Data Privacy, Internal Audit

Brief Recognition: Bryan worked with Lexmark for close to three decades. In his most recent role as CISO, he oversaw all global IT security, data privacy, internal audit, and physical security for 140+ sites worldwide. He also built Lexmark’s first-ever enterprise-wide IT security and privacy risk program to drive transformational change across the business.

Bryan emphasizes that strong AI governance depends on close collaboration between security, privacy, and AI teams. He explains that when evaluating any new AI solution, organizations should first conduct an ethics review (his team uses the EU AI Ethics Framework as a benchmark), followed by a security review to assess data flow, protection, and access controls.

The goal is to ensure data confidentiality, integrity, and availability while minimizing the exposure of sensitive information and clarifying who has access to, and is accountable for, the data.

His key point: most companies only connect security and privacy, but excluding the AI team from governance is a mistake. All three must work together from the outset to ensure the ethical and secure deployment of AI.

Bryan segues to discuss how organizations need to be stringent about any data taken from IoT devices, especially biometrics: this data should reside in a secure enclave on the device and never leave it. Individuals should decide whether they are comfortable sharing sensitive data, while IoT vendors have a responsibility to be transparent and ensure the data is appropriate for the service.

“We know in life sciences, there’s going to be other sensitive data. When you get into that more sensitive data, it still makes sense to make the capital investment on-prem to store your data. But you can still use a cloud service if you need to — that may be a model. Like everything, it’s a risk. The organization has to make that decision on what the risk tolerance is, and then they’ll decide if it is something they’re going to do on-prem, or are they going to go for speed in the cloud.”

– Bryan Willett, Chief Information Security Officer at Lexmark

Building a Layered, Decoupled Data Foundation

Episode: Storage Strategies That Keep GenAI on Budget – with Amit Gupta of Danaher

Guest: Amit Gupta, Chief Digital Officer, Danaher 

Expertise: Digital Transformation, IT Strategy, AI

Brief Recognition: As Chief Digital and Information Officer at Danaher Life Sciences, Amit led digital integration during Danaher’s acquisition of Abcam, delivered $60M+ in AI-driven funnel growth, and built global IT and digital platforms across multiple operating companies. He has over 25 years of experience driving IT, AI, and digital transformation across the Life Sciences, Biotech, CPG, Pharmaceuticals, Medical Equipment, and Industrial Manufacturing sectors. He holds an MBA from the University of California, Berkeley’s Haas School of Business, Wharton, and Nanyang Business School.

Amit explains that data is the fuel powering AI, and to make it effective, organizations need a structured data architecture built on four layers.

He outlines these as:

  1. Data aggregation: Collecting data from all sources.
  2. Data integration: Connecting systems like CRM and ERP.
  3. Data transformation: Cleaning, synthesizing, and preparing data for AI.
  4. Data harnessing: Where AI applies insights to drive real business outcomes.
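
The four layers could be sketched as a minimal pipeline. The record shapes, field names, and threshold below are hypothetical; a real deployment would sit on the platforms discussed rather than in-memory dicts.

```python
# Minimal sketch of the four-layer architecture Gupta outlines.
# Source systems, fields, and the lifetime-value cutoff are illustrative.

def aggregate(sources):
    """Layer 1 - data aggregation: collect records from all sources."""
    return [rec for recs in sources.values() for rec in recs]

def integrate(records):
    """Layer 2 - data integration: merge CRM/ERP records on a shared key."""
    by_id = {}
    for rec in records:
        by_id.setdefault(rec["customer_id"], {}).update(rec)
    return list(by_id.values())

def transform(records):
    """Layer 3 - data transformation: clean and prepare for AI."""
    return [r for r in records if r.get("email")]  # drop unusable rows

def harness(records):
    """Layer 4 - data harnessing: apply insight to drive an outcome."""
    return [r["customer_id"] for r in records if r.get("lifetime_value", 0) > 1000]

sources = {
    "crm": [{"customer_id": 1, "email": "a@x.com"},
            {"customer_id": 2, "email": None}],
    "erp": [{"customer_id": 1, "lifetime_value": 2500},
            {"customer_id": 2, "lifetime_value": 900}],
}
high_value = harness(transform(integrate(aggregate(sources))))
print(high_value)  # customer IDs worth targeting
```

Because each layer only consumes the previous layer’s output, the aggregation layer stays decoupled from the systems beneath it, which is what makes plugging in a newly acquired company’s sources tractable.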

He adds that tools like Salesforce Data Cloud and MDM platforms (like Tamr) help establish a single source of truth by organizing master data (e.g., customer information) before layering transactional data (e.g., sales history) on top.

Amit then explains that in acquisition integrations, data shouldn’t be the first focus; companies must first align with the overall transition strategy, including fundamentals such as single sign-on, email systems, and cultural integration:

“This is where that aggregation layer that I talked about earlier and the decoupled architecture help. Because as you acquire companies, having that isolated or decoupled aggregation layer helps you integrate those acquisitions into the right data sources. And you can’t boil the ocean. You have to have a prioritized plan for what data types and what data sources you want to tap and integrate into. Again, keep your end goal of the use case and the business case impact in mind. So with any such initiative, it boils down to four things, you know, we all want: cheaper, better, faster, safer, which is cost, quality, time, and compliance.”

– Amit Gupta, Chief Digital Officer at Danaher

Sequestering Trusted Data

Episode: How Data Ownership Drives Trustworthy AI Models – with Joe Lang of Comfort Systems

Guest: Joe Lang, Vice President of Service Technology and Innovation, Comfort Systems 

Expertise: Leadership, Innovation, Sales

Brief Recognition: Joe has been with Comfort Systems for nearly two decades. He has provided the company with service leadership to develop and grow the organization while creating long-term strategic goals and expectations for the company. He is also an advisory board member for Field Service USA, The Service Council, and Aquant.

Joe warns organizations against overestimating what AI platforms can do for them. He says many teams make two key errors: first, over-cleaning or limiting their data based on storage costs, which can restrict insights; and second, assuming that cloud or tech giants like AWS or Google will automatically “fix” their data and deliver ready-made results.

He stresses that while AI is powerful, it doesn’t remove the organization’s responsibility to manage, understand, and apply its own data effectively. Success still depends on human oversight and thoughtful data preparation, because no AI model today can fully replace that accountability.

Lang continues, advising leaders to treat AI implementation like an R&D project, not a quick-return investment. Organizations shouldn’t expect fast ROI: the real value comes after the data is organized, refined, and usable. The early stages require significant investment, iteration, and fine-tuning before reaching a point where AI can have a meaningful impact on business outcomes.

Joe also explains that his organization has taken an all-cloud approach but designed its security framework so that the infrastructure choice doesn’t affect safety or performance.

“In the grand scheme of things, sequester the data that you know and trust that you’ve generated as an organization. It may not be 100%, but you’ll eliminate the 30% that can actually pollute your results. So I think sequestering your data and having it all in one place where it’s easily accessible is a good approach, and it has worked well for us.”

– Joe Lang, Vice President of Service Technology and Innovation at Comfort Systems

Aligning Teams to Scale Systems

Episode: Data Pipelines that Support Globalized Education and Training Programs – with Norma Scagnoli of the Illinois Institute of Technology

Guest: Norma Scagnoli, Chief Learning and Innovation Officer, the Illinois Institute of Technology

Expertise: Instructional Design, E-learning Development

Brief Recognition: In previous roles, Scagnoli was Assistant Vice Chancellor of Enterprise Learning Innovation at Northeastern University and Research Associate Professor at the University of Illinois Urbana-Champaign. She holds a PhD in Human Resource Education from the University of Illinois Urbana-Champaign.

Norma highlights that effective data governance and the scaling of educational programs hinge on culture, collaboration, and operational readiness. She emphasizes that challenges in higher education are often cultural rather than technical; longstanding systems, accreditation metrics, and ranking pressures shape institutions’ ways of thinking, making them hesitant to adopt scalable, digital approaches. Overcoming these cultural obstacles is the first step before addressing infrastructure or content creation.

Scaling data and programs requires bringing all key stakeholders to the table, including finance, legal, student affairs, academic affairs, research, and faculty, as well as representatives of learners. These units form the “central nervous system” of data governance, ensuring that operational, regulatory, and learner-centric considerations are incorporated.

For example, expanding programs globally requires systems that can handle diverse tuition payment methods, transfer approvals, and learner demographics while maintaining accuracy and compliance.

Norma also underscores that educational systems have evolved. Faculty no longer work in isolation but are supported by teams that modularize content, adapt videos for different learner types, and coach instructors on clarity and presentation. The modular approach she describes allows programs to scale efficiently, repurposing evergreen content while enabling personalization for degree-seeking students, corporate learners, or non-credit audiences.

Deploying Modular AI Infrastructure

Episode: Scaling GenAI Without Melting the Data Center – with Greg Ratcliff of Vertiv

Guest: Gregory Ratcliff, Chief Innovation Officer, Vertiv 

Expertise: Data Science, IoT, Cloud Computing

Brief Recognition: Gregory has over 30 years of experience leading and managing technology teams, developing and launching new products and services, and creating and executing data and innovation strategies. He has been a doctoral candidate at Liberty University, where he researched Agile project management of IoT and big data initiatives.

In his conversation, Gregory draws an analogy between data infrastructure and food distribution to explain trends in AI and data storage. He says that just as large, centralized food warehouses serve major metro areas efficiently, today’s enterprise AI relies on massive centralized data centers and services for efficiency.

However, he points out that there is a growing need for smaller, local “markets”: modular, regional data stores that are highly connected and offer low-latency access.

These local data hubs, provided by colocation or cloud providers, mirror large data center services but serve regional needs with low latency and proximity benefits, such as local disaster recovery, signaling a shift toward distributed, regionally optimized AI infrastructure.

He then emphasizes a hybrid AI approach: keeping sensitive data and specific AI capabilities in-house while leveraging external platforms for additional capabilities.

He notes that certain sensitive data, such as hospital MRIs, will never be handled by large external AI providers due to privacy and regulatory concerns, so some AI must remain local.

He explains that emerging industry standards enable efficient, lower-cost manufacturing and foster competition. These building blocks, already used in large hyperscale data centers, can be scaled down into smaller, modular systems, such as a one-megawatt, container-sized data center funded by the Department of Energy’s ARPA-E program.

The concept is akin to a “playset” for data centers: you can quickly deploy modular units wherever needed (at a factory, for high-security workloads, or for local AI processing) while using the same standardized components as large-scale data centers. Essentially, he predicts a future where flexible, modular, and scalable data center building blocks support both hyperscale and local AI needs.

Engaging Stakeholders Upfront

Episode: Turning AI Ambition into Infrastructure Reality – with Julian Tang of BlackRock

Guest: Julian Tang, Chief Operations Officer for the Innovation Office, BlackRock 

Expertise: IT Strategy, IT Operations, Leadership

Brief Recognition: Julian oversees technology scouting, accelerates AI and digital solutions, and develops patent strategy at BlackRock. He holds an MBA from the USC Marshall School of Business.

Julian emphasizes the importance of involving all key stakeholders early in the implementation of AI, particularly legal, compliance, and information security (InfoSec) teams.

He explains that many organizations focus heavily on AI’s risks yet wait too long to include these groups, which leads to governance issues and delays. When these teams are brought in only at the end of a POC, it becomes messy, with teams scrambling to meet requirements retroactively or “shoehorning it through the door,” as Julian puts it.

He says most established companies already have due diligence experience through “Know Your Third Party” (KY3P) processes: frameworks for assessing risk, legal, and compliance when working with vendors. He suggests they apply the same mindset to AI initiatives by bringing in legal, risk, and security teams early and clearly defining what the organization is trying to achieve.

He emphasizes that an AI strategy doesn’t need to be overly complex; it should fit on a single page.

“Start to pull in all of these AI initiatives that are siloed. Get it into a chart, a one-pager. If it’s taking more than a page, you’re probably looking at too many things. As you review these initiatives, begin to align them. Are they core to our company’s strategic goals? If not, then maybe some of these POCs should go on pause, and get down to a handful that they can really focus on. As we start to think about the roadblocks that may be there, assign some owners and start removing those roadblocks. Get them to be really clear about what they’re going to deliver in terms of their business value.”

– Julian Tang, Chief Operations Officer for the Innovation Office at BlackRock
