Home Blog Page 17

Miro Announces Asia Hub in Singapore to Accelerate Growth Across the Region and Bring AI Collaboration to New Markets

0
Miro Announces Asia Hub in Singapore to Accelerate Growth Across the Region and Bring AI Collaboration to New Markets

Miro Logo

AI Innovation Workspace perfectly placed to help organisations maximise AI investment and accelerate innovation

Miro®, the AI Innovation Workspace for teams, announced plans to expand its operations in Asia, supporting organisations across the region in their AI transformation journey. Miro is investing in people, resources, and infrastructure as it targets growth in key markets, including Singapore, India, South Korea, and other Southeast Asia countries.

“The opportunity to grow our customer base across Asia is significant. Our investment in Singapore is part of a lasting commitment to customers, partners, and our wider ecosystem across the region.” Brigid Archibald, Head of JAPAC at Miro.

As the global innovation centre of gravity shifts toward Asia – where R&D spending reached 45% of global investment in 2024 – the organisations leading this charge need tools and platforms built for the complexity and pace of modern innovation and collaboration. Miro’s AI-powered innovation workspace is uniquely positioned to support this moment. Miro gives organisations the shared context layer they need to move from insight to execution faster than ever before. For Asia’s most ambitious innovators, where speed-to-market and cross-border collaboration are existential priorities, Miro provides the link between human creativity and AI capability.

At the heart of Miro’s expansion strategy is a new Asia hub located in Singapore. This hub will serve both the Singapore domestic market and provide a launchpad into neighbouring countries. The move strengthens Miro’s ability to support its existing customers, reach new customers, and continue to build an ecosystem with regional partners, including AWS, Vsaas Global, GoPomelo, Altudo, and others.

Marketing Technology News: MarTech Interview With Fredrik Skantze, CEO and Co-founder of Funnel

“Singapore is a natural choice as a location to base our Asia operations,” said Sunil Pamnani, Head of Asia Sales at Miro. “This is a place where organisations, and government institutions alike understand the need for transformation – not just adoption. They value long-term thinking, disciplined execution, and technology that delivers real outcomes. That mindset is exactly what’s needed to reimagine how teams and AI work together.”

“The opportunity to grow our customer base across Asia is significant,” said Brigid Archibald, Head of JAPAC at Miro. “Our investment in Singapore is part of a lasting commitment to customers, partners, and our wider ecosystem across the region. Organisations are at a critical moment where they need to deliver on their AI investments and move from experimentation to integration. Miro is helping leaders to achieve this.”

Globally, Miro has 100M+ users and more than 250,000 customers. A significant number of these customers are based in countries across the region – and they are already using Miro and realising the benefits of embedding Miro into their workflows and critical operations. These include TCS Pace (operated by Tata Consultancy Services) and Frasers Property, which are using Miro to reduce time to market for product development lifecycle, improve the quality of ideas, and redefine their innovation processes.

Marketing Technology News: The Death of Third-Party Cookies Was Just the Start. Are You Ready for Consent Orchestration?

“With Miro AI, we can use intelligent prompts to challenge assumptions, test ideas, and explore new perspectives,” said Subin Pillai, Product Manager and Studio Lead at TCS Pace. “Miro Sidekicks acts like any other team member, helping validate use cases, suggest improvements, and simulate real-world scenarios. I could prompt it to take on different personas, to challenge our assumptions, to offer perspectives that broke through our mental debt. Suddenly, we weren’t just facilitating a workshop. We were orchestrating a symphony of human and artificial intelligence. The impact is 50% faster innovation cycles with working prototypes in 90 minutes.”

“Miro has saved us time, reduced costs, and made innovation more accessible,” said Iris Tan, Senior Manager, Strategic Innovation at Frasers Property. “Our senior leaders and global participants now use it to structure ideas and drive strategic decisions faster than ever before. We’ve moved away from simply building spaces to truly understanding what our tenants and their customers need. Design thinking is the foundation of that shift, and Miro allows us to embed it across our entire organisation.”

Write in to psen@itechseries.com to learn more about our exclusive editorial packages and programs.

Tells.co Launches AI Video Messaging Platform with RCS Business Messaging for Personalized Video at Scale

0
Kaltura and Descript Partner to Drive AI-Powered Video Innovation in the Enterprise, Enabling Scalable Content Production Across Regulated Industries

Tells.co Logo

AI-powered platform delivers personalized video through RCS rich messaging

Tells.co announced the launch of its AI video messaging platform, combining AI-generated personalized video with RCS Business Messaging to deliver custom video content directly to consumers’ native messaging apps. The platform represents a first-of-its-kind integration of AI video generation and RCS rich messaging at enterprise scale.

We’re combining AI-generated personalized video with RCS Business Messaging to create the most compelling customer communication channel that exists.”

— David Schlaegel, Co-Founder, Tells

AI Video Meets RCS Business Messaging

The Tells.co platform uses artificial intelligence to generate unique, personalized videos for each recipient on a campaign list. These AI videos are then delivered via RCS Business Messaging with inline playback — meaning recipients watch personalized video content directly in their messaging app without clicking links, downloading apps, or leaving the conversation.

Marketing Technology News: MarTech Interview with Lee McCance, Chief Product Officer @ Adverity

“We built this because the future of business messaging isn’t text — it’s personalized AI video delivered through RCS,” said David Schlaegel, CEO of Tells.co. “Every recipient gets a video made specifically for them, playing right in their messages with our verified sender profile. Nothing else on the market combines AI video personalization with RCS delivery at this scale.”

How AI Video Personalization Works

The AI video engine processes customer data — names, addresses, vehicle information, appointment history, property details — and generates a completely unique video for every individual recipient. Each AI-generated video features natural voice synthesis, dynamic visuals tailored to the recipient’s data, and personalized storylines that speak directly to the viewer’s situation.

The platform renders thousands of personalized AI videos in minutes, enabling campaigns of 10,000+ recipients where every video is unique. Combined with RCS verified sender profiles displaying brand logos and verification badges, the result is a trusted, high-engagement messaging experience.

Marketing Technology News: What is a Full Stack Marketer; What MarTech Matters Most to Full Stack Marketers?

RCS Video Driving Results Across Industries

Tells.co is deploying AI video through RCS across multiple verticals including real estate, automotive, and healthcare. Use cases include personalized home appraisal videos that reference specific property data and neighborhood sales, service reminder videos for auto dealerships featuring individual vehicle details, and healthcare follow-up videos tailored to patient treatment history.

Early campaigns combining AI video with RCS delivery are showing conversion rates significantly above traditional SMS and email benchmarks, driven by the combination of personalized video content, inline RCS playback, and verified brand trust signals.

First US Platform Approved for RCS Business Messaging

Tells.co is among the first platforms in the United States approved for RCS Business Messaging, building RCS capabilities into its core infrastructure rather than adding them as a supplementary feature. The company’s AI video messaging solution leverages this native RCS integration to deliver rich video experiences at scale with full analytics — including view rates, watch duration, and CTA engagement tracking.

Write in to psen@itechseries.com to learn more about our exclusive editorial packages and programs.

Cisco Secure AI Factory with NVIDIA Makes AI Easier to Deploy and Secure, Anywhere Organizations Need It Cisco logo

0
Cisco Secure AI Factory with NVIDIA Makes AI Easier to Deploy and Secure, Anywhere Organizations Need It Cisco logo

Expanded architecture lets businesses run AI at scale, from central data centers to the factory floor, without sacrificing performance or security

  • Cisco expands its Secure AI Factory with NVIDIA to work not just in large data centers, but at local edge sites where real-time decisions can’t wait, from hospitals and warehouses to moving vehicles.

  • Cisco is the premier partner to deliver partner-developed systems featuring NVIDIA Spectrum-X switch silicon paired with a Cisco operating system, providing customers the flexibility of leveraging both NVIDIA Cloud Partner-compliant reference architectures and Cisco Silicon One-based architectures.

  • Cisco adds deeper security capabilities to its reference architecture by extending Hybrid Mesh Firewall policy enforcement to NVIDIA BlueField DPUs and integrating Cisco AI Defense to secure multi-agent systems.

  • Cisco AI Defense will support and secure NVIDIA’s new open agent development platform, OpenShell, adding controls and guardrails to govern agent and claw actions.

Cisco announced a major expansion of its Secure AI Factory with NVIDIA, giving customers a framework for deploying AI across their entire infrastructure – from central data center to local sites where data is created and decisions are made.  Enterprises, neoclouds, sovereign clouds, and service providers can now move AI from pilot to full-scale production without stitching together disconnected systems, compressing deployment timelines from months to weeks and embedding security from the start.

“Most organizations understand the potential for AI to transform their businesses, but they’re navigating how to deploy the technology safely and at scale,” said Chuck Robbins, Chair and CEO, Cisco. “In partnership with NVIDIA, we’re solving that challenge with an architecture that sets a new standard for performance – making it simpler to deploy, operate, and secure AI infrastructure.”

“AI factories are transforming every industry, and security must be built into every layer—from silicon to software—to protect data, applications, and infrastructure,” said Jensen Huang, founder and CEO of NVIDIA. “Together, NVIDIA and Cisco are building the secure foundation for AI infrastructure—core to edge—so companies can scale intelligence with confidence.”

Marketing Technology News: MarTech Interview with Lee McCance, Chief Product Officer @ Adverity

AI That Runs Everywhere, Not Just in the Data Center

AI inference happens where data lives and decisions can’t wait, whether on the hospital floor or for analyzing video of a factory floor in real-time to keep workers safe. This reality fundamentally reshapes infrastructure by requiring inference workloads to operate locally — closer to the data, the devices, and the moment a decision must be made. Cisco and NVIDIA are enabling organizations to support edge inferencing use cases by:

  • Transforming the Enterprise Edge: Now supporting NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs across the Cisco UCS and Cisco Unified Edge portfolios, Cisco enables enterprises to run mission-critical AI workloads at the edge without the energy cost and footprint of data center-scale hardware.
  • Transforming the Service Provider Edge: Today Cisco announces the Cisco AI Grid with NVIDIA reference design that combines the power of Cisco’s Mobility Services Platform with NVIDIA RTX PRO Blackwell Series GPUs. This enables service providers to leverage their existing networks to offer managed services for edge AI applications with carrier-grade reliability and sovereignty.

Driving Performance and Efficiency for Massive-Scale AI Factories

Building on the momentum of the recently launched systems powered by Cisco Silicon One G300 for scale-out and P200 for scale-across, Cisco continues to raise the performance ceiling while making the whole process faster and simpler.

  • Next-Generation Performance: Cisco’s latest high-speed switches power the most demanding AI workloads, including a new 102.4Tbps Cisco N9100 powered by NVIDIA Spectrum-6 Ethernet switch silicon. This joins the now generally available 800G N9100 powered by NVIDIA Spectrum-4 Ethernet switch silicon.
  • Rapid Deployment: Cisco Nexus Hyperfabric, now a part of Cisco Nexus One, will support Cisco N9000 Series switches, including the N9100 Series powered by NVIDIA Spectrum-X Ethernet silicon. Now organizations can transform a complex, multi-vendor integration puzzle into a simple, full-stack solution to cut deployment times and reduce the burden on IT.

Customers building large AI factories now have two validated paths to choose from: an AI factory based on a reference architecture compliant with the NVIDIA Cloud Partner (NCP) program, and a Cisco Cloud Reference Architecture built on Cisco Silicon One that adheres to the same design tenets.

Security Fused into Every Layer

In an era where AI models are high-value assets and agents are more autonomous, taking actions, making decisions and interacting with other agents – security can’t be an afterthought. Cisco is embedding protection into the fabric of the Secure AI Factory with NVIDIA to safeguard against both external threats and rogue agent behavior, including:

  • Securing AI infrastructure: AI is only as safe as the hardware running it – and attackers know it. Cisco Hybrid Mesh Firewall delivers consistent security policies across a diverse set of enforcement points: network switches, workload agents, and more. Greater coverage means fewer gaps for attackers to exploit. Today, Cisco is extending the Cisco Hybrid Mesh Firewall solution to enable policy enforcement on NVIDIA BlueField data processing units (DPUs) embedded in NVIDIA GPU servers connected to Cisco Nexus One fabrics. Threats are blocked at the server level before they ever reach an organization’s data.  The result: AI workloads that can be protected from the inside out, with zero performance trade-off.
  • Securing AI agents: Cisco AI Defense delivers model security, automated vulnerability testing, and now purpose-built guardrails for AI agents at the edge through integration with NVIDIA NeMo Guardrails, a part of NVIDIA AI Enterprise software. This helps AI developers and security teams stay ahead of emerging threats and maintain trust in AI. AI deployments are becoming increasingly distributed, with agents at edge locations often interacting with those at the core to accomplish tasks and execute workflows. AI Defense, as a part of the Cisco Secure AI Factory with NVIDIA, now extends to securing those agent-to-agent interactions.

Marketing Technology News: What is a Full Stack Marketer; What MarTech Matters Most to Full Stack Marketers?

Cisco Secures Enterprise AI Agent Development

Building on Cisco’s commitment to fuse security into all layers of AI infrastructure, as well as the agentic workforce, Cisco also announced today that Cisco AI Defense will support and secure NVIDIA’s OpenShell runtimes – part of the NVIDIA Agent Toolkit – adding controls and guardrails to govern agent and claw actions. By continuously monitoring and validating every tool and action an agent performs, Cisco AI Defense ensures that enterprises can confidently deploy AI agents to manage critical workflows without compromising security. This integration bridges the gap between innovation and risk, allowing organizations to trust their autonomous systems to operate reliably and securely.

Industry Reactions:
“As a leader in high-performance computing solutions, Cirrascale is thrilled by the introduction of new NVIDIA Spectrum-6 based Cisco’s N9100 series switches, extending Cisco’s NCP reference architecture-compliant portfolio with an impressive 102.4T capacity and a unified management plane through Nexus One. These innovations, combined with the flexibility of NX-OS and SONiC, enable us to scale our AI infrastructure seamlessly while maintaining operational simplicity. The availability of the 51.2T Spectrum-4 switch further enhances our ability to deliver cutting-edge AI solutions to our clients with unmatched performance and reliability.”
– Alex Nataros, CTO, Cirrascale Cloud Services

“Sharon AI looks forward to the Cisco’s N9100 series switches, offering 102.4T capacity with Nexus One’s cloud-managed Nexus Hyperfabric. With NCP RA compliance and the 51.2T Spectrum-4 based N9100 switch availability, we will be scaling our AI infrastructure with robust performance and efficiency. The G300 Silicon One-based N9300 switches provide the flexibility to meet evolving customer needs. Turnkey AI infrastructure deployment through Nexus One significantly simplifies operations and accelerates time-to-value for our initiatives.”
– Andrew Leece, COO and founder, Sharon AI

“World Wide Technology’s clients trust Cisco for enterprise networking. Their robust AI networking portfolio extends that trust to AI workloads. Cisco’s portfolio offers choice and flexibility to clients to build tailored AI infrastructure using Cisco Silicon One and NVIDIA Spectrum-X Ethernet switch silicon based switches with stellar performance up to 102.4Tbps running NX-OS or SONiC and unified by the Nexus One management plane. We’re excited about these advancements to deliver the scalability and performance required for the agentic era.”
– Jeff Fonke, Practice Director – Global Solutions & Architecture, World Wide Technology

“As organizations move beyond the experimentation phase of AI, the primary challenge has shifted from ‘what can AI do’ to ‘how do we operationalize it securely at scale.’ The industry is at a critical inflection point where AI workloads — specifically real-time inferencing —must move closer to the data at the edge without creating new security or infrastructure silos. The partnership between Cisco and NVIDIA is designed to offer customers the flexibility and choice they need to scale while helping them overcome complex integration challenges.”
 – Mary Johnston Turner, Global Lead, Digital and Datacenter Infrastructure and Services, IDC

Write in to psen@itechseries.com to learn more about our exclusive editorial packages and programs.

IBM Announces Expanded Collaboration with NVIDIA to Advance AI for the Enterprise

0
IBM Announces Expanded Collaboration with NVIDIA to Advance AI for the Enterprise

Advancements across GPU-native data analytics, unstructured data extraction, on-premises and cloud infrastructure, Nestlé global supply chain decision speed, and consulting to mobilize enterprise AI at scale

IBM announced at GTC 2026 an expanded collaboration with NVIDIA to help enterprises operationalize AI at scale. Advancing efforts across GPU-native data analytics, intelligent document processing, on-premises and regulated infrastructure deployments, cloud, and consulting, the collaboration aims to give enterprises the data foundation, infrastructure, and expertise to move AI from pilot to production.

Enterprises are making significant investments in AI, but too many remain stuck between experimentation and production at scale. The barriers are consistent: data is fragmented and difficult to access; infrastructure wasn’t built for advanced AI workloads; AI deployments don’t support the compliance and residency requirements of regulated industries; and many organizations still need the guided expertise to implement and deploy the technologies.  Today’s announcements from IBM and NVIDIA are designed to close these gaps.

“In the next wave of enterprise AI, the model layer will rely on the data, infrastructure, and orchestration layers – and on businesses that can bring all three together,” said Arvind Krishna, Chairman and CEO, IBM. “Our partnership with NVIDIA goes to the heart of that challenge. Together, we’re giving enterprises the solutions they need to stop experimenting with AI and start running on it.”

“IBM pioneered enterprise computing and data processing six decades ago — and today they are redefining it for the AI era,” said Jensen Huang, founder and CEO of NVIDIA. “Data is the ground truth that gives AI context and meaning. Together with IBM, we are bringing CUDA GPU acceleration directly into the data layer — turning analytics and document processing from bottlenecks into real-time intelligence engines.”

Marketing Technology News: MarTech Interview with Miguel Lopes, CPO @ TrafficGuard

Accelerating Structured Data Analytics with GPU-Native Computing
IBM and NVIDIA are collaborating on an open-source integration to increase performance and reduce costs around how enterprises extract intelligence from their massive datasets. IBM watsonx.data’s SQL engine Presto is accelerated by NVIDIA cuDF to enable faster query execution on large datasets.

To validate in production, IBM and NVIDIA applied GPU-accelerated watsonx.data to Nestlé’s Order-to-Cash data mart. The data mart tracks every order, fulfillment, delivery, and invoice across 186 countries and processes terabytes across 44 tables. Nestlé was ideal for this proof of concept because of its strong digital backbone. With globally unified data models, a consolidated data foundation, and a single source of truth across markets, Nestlé already had timely, accurate, and trusted data at scale — the right foundation to put GPU-accelerated analytics to the test in a real production environment.

On CPUs, a single refresh previously took Nestlé 15 minutes and only ran a handful of times a day. Nestlé reports that with NVIDIA’s software and GPUs, the IBM watsonx.data Presto engine reduced query runtime down to three minutes – achieving 83% cost savings and an overall 30X price-performance improvement.

“For a company that serves billions, data underpins decision making across our global operations,” said Chris Wright, Chief Information and Digital Officer of Nestlé. “Working with IBM and NVIDIA, a targeted proof of concept has demonstrated the ability to refresh global operations data in a few minutes and at reduced cost. Our focus now is on turning this capability into tangible business impact — further improving decision speed in areas such as manufacturing and warehousing, and scaling these capabilities across our enterprise.”

Helping Enterprises Unlock the Full Value of Their Data
Most enterprises aren’t lacking data. But often, they’re unable to access and use it. SharePoint sites, CMS systems, vendor research, SME knowledge: the information exists but it is trapped in unstructured, multi-modal formats that are difficult to extract, standardize, and trust at decision speed.

IBM and NVIDIA are addressing this with Docling from IBM and NVIDIA Nemotron open models – a combination designed to make intelligent document extraction available at enterprise scale. Docling standardizes and converts documents into AI-ready formats with source-level traceability, while NVIDIA Nemotron models accelerate ingestion of multi-modal content. Early results show significantly higher throughput compared to other open-source models, while maintaining or improving accuracy wherever GPU-accelerated infrastructure is available.

Marketing Technology News: Is the Traditional CDP Already Out of Date?

GPU-Optimized Infrastructure for On-Prem and Regulated Deployments
IBM and NVIDIA are extending their data efforts to the infrastructure layer. NVIDIA has selected IBM Storage Scale System 6000 to provide 10PB of high-performance storage to serve massive data for its GPU-native advanced analytics engines, pairing IBM’s unified data access layer and massive parallel throughput with NVIDIA’s GPU pipelines. IBM Storage Scale 6000 is certified and validated on NVIDIA DGX platforms.1

For enterprises and governments requiring data residency and regulatory control, IBM and NVIDIA are exploring the integration of IBM Sovereign Core and NVIDIA infrastructure and NVIDIA Nemotron models that would focus on enabling GPU-intensive AI workloads that run entirely within regional boundaries – without compromising governance or compliance.

Advancing the Enterprise AI Stack with IBM, NVIDIA and Red Hat
IBM and NVIDIA are also deepening their partnership across cloud and enterprise consulting to advance clients’ enterprise AI adoption. IBM plans to offer NVIDIA Blackwell Ultra GPUs on IBM Cloud in early Q2 2026 for large-scale training, high-throughput inferencing, and AI reasoning. This technology will also be integrated across Red Hat AI Factory with NVIDIA, and VPC servers with enterprise-grade compliance and data residency controls.

Additionally, IBM Consulting plans to bring Red Hat AI Factory with NVIDIA to clients through IBM Consulting Advantage – an IBM enterprise AI platform that helps clients build and scale AI across their technology environments. Combined with Red Hat AI Factory with NVIDIA, the platform is built to simplify how companies prepare data, build models, and deploy AI, while also enhancing performance and oversight. This builds on IBM Consulting’s broader efforts to help clients maximize outputs from their AI investments.

Write in to psen@itechseries.com to learn more about our exclusive editorial packages and programs.

Hitachi Vantara Expands Hitachi iQ Capabilities to Help Enterprises Advance Responsible Agentic AI

0
Hitachi Vantara Expands Hitachi iQ Capabilities to Help Enterprises Advance Responsible Agentic AI

Expanded AI blueprints, infrastructure capabilities and intelligent data integration strengthen the Hitachi iQ portfolio for secure, on-prem production AI

Hitachi Vantara, the data storage, infrastructure and hybrid cloud management subsidiary of Hitachi Ltd., announced new capabilities across the Hitachi iQ portfolio, including enhanced AI blueprints and multi-agent coordination in Hitachi iQ Studio, expanded NVIDIA AI infrastructure options, and deeper data integration to support agentic AI in on-premises and virtualized environments. Together, these enhancements position Hitachi iQ as a comprehensive, enterprise-ready AI solution, enabling customers to build and manage AI agents within their own environments.

As organizations move from AI experimentation to scaled deployment, many are facing growing challenges tied to data complexity, AI sovereignty and evolving governance and security requirements. According to a recent report, in the U.S. and Canada, only 42% of organizations are considered data-mature, and 84% of those organizations report measurable AI ROI, compared with just 48% of organizations with weaker data foundations. As AI moves into production, the ability to pair strong data practices with secure, well-governed infrastructure is becoming a critical differentiator. The Hitachi iQ portfolio is designed to help close that gap by bringing together AI-ready infrastructure, integrated agent capabilities and enterprise-grade oversight and compliance controls designed for responsible enterprise AI deployments.

“AI is moving into production faster than many organizations’ data foundations are ready to support,” said Octavian Tanase, chief product officer, Hitachi Vantara. “With these latest enhancements to the Hitachi iQ portfolio, we are expanding across software innovation, high-performance infrastructure and intelligent data integration to give customers greater flexibility and control as they move agentic AI from pilot to production.”

Marketing Technology News: MarTech Interview with Miguel Lopes, CPO @ TrafficGuard

New Accelerated Computing Options for Modern AI Workloads
Hitachi iQ is designed to help enterprises deploy and operate AI infrastructure with predictable performance and reliability, built on Hitachi Vantara’s Virtual Storage Platform One (VSP One) data platform and supporting HMAX by Hitachi, a suite of next-generation solutions that brings the power of AI to social infrastructure. Hitachi iQ now supports NVIDIA Blackwell GPUs (air-cooled), NVIDIA Blackwell Ultra GPUs (air-cooled and liquid-cooled) and a 2U NVIDIA MGX-based system with up to four NVIDIA RTX PRO™6000 Blackwell Server Edition GPUs. Hitachi iQ also plans to support the newly announced NVIDIA RTX PRO™ 4500 Blackwell Server Edition GPU. These options give customers greater flexibility to align compute with their AI workloads – from model development and fine-tuning to inference and agentic applications – while supporting diverse form factors that address cooling, power and space constraints and meet enterprise requirements for security, resilience and production readiness.

Hitachi iQ integrates accelerated computing, networking and storage into a validated infrastructure stack. It is built to keep data close to compute, helping improve utilization and efficiency for data-intensive AI workloads.

New AI Blueprints and Data Orchestration in Hitachi iQ Studio
Hitachi iQ Studio, the AI software component of the Hitachi iQ portfolio, enables organizations to design, deploy and govern AI agents within secure enterprise environments. Built on the NVIDIA AI Data Platform reference design, it now includes expanded AI blueprints and multi-agent coordination capabilities that help teams move from prototype to production with greater clarity and control.

The new blueprints introduce defined agent roles, including supervisor and worker models. Worker agents execute tasks while supervisor agents coordinate multi-agent workflows and adapt based on outcomes. This structured orchestration helps organizations automate complex processes while maintaining visibility, efficiency and governance.

Hitachi iQ Studio also expands support for NVIDIA Nemotron models, large language models designed to power advanced, tool-using agentic AI systems, and introduces time machine capabilities that enable AI systems to navigate historical datasets with context and speed. This time-aware intelligence strengthens explainability and supports industries that rely on long-term data patterns to inform decisions.

“As enterprises continue to scale AI, the ability to combine accelerated computing with consistent software and trusted data becomes essential,” said Jason Hardy, vice president of storage technologies, NVIDIA. “Full-stack AI infrastructure optimized for enterprise demand enables organizations to support a wider range of AI outcomes while maintaining the performance, governance, and operational consistency enterprises require.”

Marketing Technology News: Is the Traditional CDP Already Out of Date?

Expanded Hammerspace Capabilities to Simplify, Automate and Accelerate Data Access
Building on their strategic partnership, Hitachi iQ delivers tighter integration between Hitachi iQ Studio and Hammerspace to streamline data access for agent-driven workflows. With this expanded capability, data managed by Hammerspace can be accessed directly within Hitachi iQ Studio using Model Context Protocol (MCP), an open standard that allows AI systems to securely connect to external data sources.

This enables customers to build AI agents in Hitachi iQ Studio that can securely work with and help manage their Hammerspace data environments, extending automation and insight directly to distributed data without requiring relocation. Data remains governed and protected within VSP One, helping maintain availability and consistent performance as agents operate across environments.

This deeper integration improves data observability and simplifies access to distributed datasets without adding infrastructure complexity, allowing AI agents to work with in-place data across environments without unnecessary data movement. The result is a stronger connection between data orchestration and agent coordination, supported by VSP One Block infrastructure to deliver consistent performance and 100% data availability while preserving hybrid cloud flexibility for enterprise AI.

Accelerating AI Storage
Hitachi Vantara will also be supporting the newly announced NVIDIA STX reference architecture to develop AI-native storage solutions powered by NVIDIA Vera Rubin, BlueField-4, Spectrum-X networking, and NVIDIA AI software.

Hitachi Vantara will showcase Hitachi iQ and Hitachi iQ Studio at NVIDIA GTC 2026, taking place March 16-19 in San Jose, California. Attendees can explore how Hitachi iQ simplifies and advances agentic AI development across industries.

Write in to psen@itechseries.com to learn more about our exclusive editorial packages and programs.

Dell AI Data Platform with NVIDIA Supercharges Enterprise AI with Breakthrough Data Orchestration and Storage Innovations

0
Dell AI Data Platform with NVIDIA Supercharges Enterprise AI with Breakthrough Data Orchestration and Storage Innovations

Dell AI Data Platform with NVIDIA advancements automate the complete AI data lifecycle and deliver extreme AI storage performance for demanding agentic AI workloads

Dell Technologies will support all of NVIDIA’s latest AI storage and data management innovations

Dell Technologies announces Dell AI Data Platform with NVIDIA advancements that help enterprises discover and activate enterprise data while delivering extreme storage performance to power AI applications and autonomous AI agents.

Why it matters
AI is rapidly shifting from assistive tools to autonomous, agentic systems, but its effectiveness is constrained by the data it can access, trust and act upon. Many enterprises hit a wall because much of their data remains trapped in silos, lacking structure, business context, and governance. The result: AI initiatives stall, investments underdeliver and competitive advantages slip away.

Dell and NVIDIA are removing one of the biggest blockers to enterprise AI: data that’s too slow, too siloed, or too messy to use. As a core component of the Dell AI Factory with NVIDIA, the Dell AI Data Platform with NVIDIA activates enterprise data for AI while maintaining security, governance, and best-in-class performance at scale. Customers see up to 12X faster vector indexing1, 3X faster data processing,2 and 19X faster time-to-first-token3 than traditional computing approaches.

Marketing Technology News: MarTech Interview with Miguel Lopes, CPO @ TrafficGuard

Automating the entire AI data lifecycle
Dell data engines, accelerated by NVIDIA AI infrastructure, automate the complete AI data lifecycle and dramatically reduce data preparation time while maintaining enterprise governance.

  • The Dell Data Orchestration Engine, powered by technology from Dell’s recent Dataloop acquisition, redefines how enterprises operationalize data for AI. The no-code, low-code engine orchestrates the AI data lifecycle—automatically discovering, labeling, enriching, and transforming structured, unstructured, and multimodal data into governed, AI-ready datasets at scale. By combining automated pipelines with active learning and human-in-the-loop workflows, organizations can continuously improve dataset quality and model accuracy while maintaining governance and control. The Data Orchestration Engine Marketplace lets organizations deploy production-ready data workflows without having to build them from scratch with a curated library of NVIDIA NIM microservices, NVIDIA AI Blueprints and more than 200 other models, applications and templates.
  • Dell Technologies supports the latest NVIDIA AI-Q blueprint, helping enterprises build customizable AI agents that deliver actionable insights for smarter decision-making. NVIDIA-accelerated data engine integrations in the Dell AI Data Platform enable high-performance data preparation, retrieval, and reasoning pipelines across structured and unstructured data. Customers also gain access to a growing library of pre-built NVIDIA blueprints and NIM microservices, along with the NVIDIA Nemotron 3 Super model on Dell Enterprise Hub on Hugging Face.
  • Dell Technologies will also support NVIDIA STX, a new modular reference design powered by next-generation NVIDIA Vera Rubin NVL72, NVIDIA BlueField-4 DPUs, and NVIDIA Spectrum-X™ Ethernet networking that accelerates how organizations manage, process, and retrieve data for AI.
  • The new AI Assistant within the Dell Data Analytics Engine brings conversational natural language interface directly into SQL analytics. Business users can query, visualize and collaborate on governed data products with a common semantic understanding of key metrics intuitively without specialized SQL knowledge. This democratizes data access, streamlines decision-making and unlocks deeper insights faster, which is particularly critical for organizations deploying AI agents that need to access structured data.
  • Within the Dell AI Data Platform with NVIDIA, the introduction of NVIDIA RTX PRO™ Blackwell Server Edition GPUs will bring acceleration directly into the data platform layer. Accelerated NVIDIA CUDA-X libraries including NVIDIA cuDF for structured data processing, and NVIDIA cuVS for vector indexing and search applied to unstructured data, work alongside Dell’s data engines and optimized infrastructure to deliver up to 3x faster SQL queries4 and 12x faster vector indexing.5 These technologies help organizations develop more responsive AI applications and improved infrastructure efficiency when processing and preparing data at scale.

Marketing Technology News: Is the Traditional CDP Already Out of Date?

Extreme-scale storage software innovations keep GPUs running at full speed
As enterprises move from AI experimentation to production deployment, storage becomes the critical constraint. Traditional storage architecture slows down as it scales, creating bottlenecks that leave GPUs idle and waste infrastructure investments. Dell’s AI-optimized storage engines solve this problem with purpose-built architectures that maintain performance at massive scale.

  • Dell Lightning File System, the world’s fastest parallel file system6, delivers extreme performance density for AI training and inferencing environments with up to 150 GB/second per rack7, up to 20X greater performance versus traditional flash-only scale out file competitors8 and up to 2X greater throughput per rack unit than competing parallel file systems.9 Purpose-built fabric architecture with direct storage access prevents slowdowns, keeping GPUs fully utilized at massive scale. Lightning FS integrates seamlessly into NVIDIA-based AI infrastructures, keeping training and inference workloads running at full speed.
  • Dell Exascale Storage, the only 3-in-1 storage built for extreme-scale AI and HPC10, gives IT teams the flexibility to deploy Dell’s best-of-breed file, object, and parallel file system storage software on the latest Dell PowerEdge servers. Customers can allocate Dell PowerScale, Dell ObjectScale, and/or Dell Lightning File System storage resources on a common hardware platform to support the most demanding AI and HPC environments like high-frequency trading and neoclouds. With support for NVIDIA CX-8 and CX-9 SuperNICs and planned network connectivity up to 800GbE, Exascale delivers read performance up to 6TB/second per rack11, providing the high throughput required by multimodal AI workloads.
  • NVIDIA CMX context memory storage platform support and inference acceleration with KV Cache on shared storage across Dell PowerScale, Dell ObjectScale and Dell Lightning File System allows organizations to offload KV cache from GPU memory to Dell CMX Storage and high-speed shared network storage based on performance needs. This dramatically improves GPU utilization for long-context and agentic AI workloads, allowing AI systems to maintain context across extended interactions without exhausting GPU memory. This capability is essential for enterprises deploying AI agents that need to reference extensive historical data or maintain long conversation threads.
  • PowerScale performance testing: New testing demonstrates that Dell PowerScale’s software-driven Parallel Network File System (pNFS) architecture delivers up to 6X faster performance with large files in enterprise AI environments compared to NFSv3.12 This keeps GPU-intensive AI workloads continuously fed with data, reducing bottlenecks across the entire pipeline and ensuring expensive GPU resources don’t sit idle waiting for data.

Dell AI Factory with NVIDIA delivers proven path to enterprise AI ROI
Dell Technologies today marks the two-year anniversary of the Dell AI Factory with NVIDIA with advancements spanning its end-to-end AI infrastructure, software, solutions, and services portfolio that help enterprises move AI from pilot to production at scale. With over 4,000 customers deploying the Dell AI Factory, and early adopters seeing up to 2.6x ROI within the first year13, Dell proves that an end-to-end approach delivers measurable business results.

Write in to psen@itechseries.com to learn more about our exclusive editorial packages and programs.

Switch Integrates NVIDIA Omniverse DSX Blueprint into Switch’s EVO AI Factories

0
Switch Integrates NVIDIA Omniverse DSX Blueprint into Switch's EVO AI Factories

Switch’s Living Data Center (LDC) EVO transforms AI factories from human-managed infrastructure to an automated, intelligent system.

Switch announced that they have integrated the NVIDIA Omniverse DSX Blueprint into their EVO AI Factory™ architecture and LDC EVO™ operating system. LDC EVO, combined with NVIDIA Omniverse libraries and OpenUSD, delivers high-fidelity operations across Switch’s deployed portfolio. LDC EVO’s workflows, intelligence and modeling deliver live, physics-accurate visual representation of the EVO AI Factory.

Traditional data centers run on DCIM, or data center infrastructure management, where humans make decisions assisted by monitoring tools. AI factories operate at extreme density, creating operational complexity that exceeds what DCIM was designed to manage. LDC EVO replaces this model. LDC EVO presents the automation of every system in the facility in near real-time, maintaining an updated 3D digital twin of the complete AI factory, providing our people with unprecedented support and capabilities.

Every NVIDIA DGX deployment requires a facility engineered to its specifications. Switch’s EVO AI Factory is that facility. Switch enables its customers to deploy NVIDIA accelerated computing on Dell PowerEdge servers at extreme density from day one. Switch helped deliver deployments of NVIDIA Grace Blackwell on Dell PowerEdge servers in EVO AI Factories. LDC EVO presented capabilities to allow its customers to validate these hardware configurations before physical deployments.

Marketing Technology News: MarTech Interview with Miguel Lopes, CPO @ TrafficGuard

Leadership Perspectives

“LDC EVO is the operating system for Switch’s EVO AI Factory, orchestrating the modular and configurable campus architecture that enables hybrid cooling and supports extreme AI densities,” said Zia Syed, Chief Technology Officer of Switch. “It’s built to operate every generation of NVIDIA reference design, including the Rubin DSX architecture. Leveraging NVIDIA Omniverse libraries and OpenUSD for digital twins, we’ve layered in automation workflows and operational intelligence to unify deployments. LDC EVO presents dynamic operations of an AI Factory at scale.”

“Gigawatt-scale AI factories require a shift toward autonomous, telemetry-driven infrastructure capable of orchestrating extreme power and cooling densities in real time,” said Vladimir Troy, Vice President of AI Infrastructure at NVIDIA. “The integration of the NVIDIA Omniverse DSX blueprint into the Switch LDC EVO operating system provides the high-fidelity simulation and operational intelligence necessary to optimize the deployment of next-generation NVIDIA AI infrastructure.”

Marketing Technology News: Disrupt or Be Disrupted: The AI Wake-Up Call for B2B Marketers

The Switch Ecosystem

We brought together the expertise of leading suppliers across the AI infrastructure ecosystem including NVIDIA, Dassault Systèmes, Cadence, ETAP, Schneider Electric, SUSE, Dell Technologies, Oxide Computer Company and Procore Technologies, Inc.

Within LDC EVO, these collaborating technologies operate as integrated capabilities: thermal modeling, electrical simulation, reality capture, construction lifecycle management and facility telemetry are synchronized into a single presentational environment. The result is that teams can simulate, monitor and adjust operations—all within one interface that improves every operational cycle.

This will be showcased at NVIDIA GTC 2026, where Switch will feature its EVO AI Factory in the DSX AI Infrastructure Pavillion, Booth #91.

Write in to psen@itechseries.com to learn more about our exclusive editorial packages and programs.

LangChain Announces Enterprise Agentic AI Platform Built with NVIDIA

0
LangChain Announces Enterprise Agentic AI Platform Built with NVIDIA

Comprehensive agent engineering platform combined with NVIDIA AI enables enterprises to build, deploy, and monitor production-grade AI agents at scale

LangChain, the agent engineering company behind LangSmith and open-source frameworks that have surpassed 1 billion downloads, announced a comprehensive integration with NVIDIA to deliver an enterprise-grade agentic AI development platform. As part of this collaboration, LangChain is also joining the Nemotron Coalition, NVIDIA’s global initiative to advance frontier open AI models through shared expertise, data, and compute.

LangChain Announces Enterprise Agentic AI Platform Built with NVIDIA.

The collaboration combines LangChain’s LangSmith agent engineering platform and its open-source frameworks (Deep Agents, LangGraph, and LangChain) with NVIDIA Agent Toolkit, including NVIDIA Nemotron models, NVIDIA NeMo Agent Toolkit profiling and optimization, NVIDIA NIM microservices, and NVIDIA Dynamo giving developers a complete stack to build, deploy, and continuously improve AI agents in production. The platform also incorporates NVIDIA OpenShell, a secure runtime that sandboxes autonomous, self-evolving agents with policy‑based guardrails. Development teams often spend months building custom infrastructure rather than delivering business value. The LangChain-NVIDIA platform is designed to close that gap.

Marketing Technology News: MarTech Interview with Nicholas Kontopoulous, Vice President of Marketing, Asia Pacific & Japan @ Twilio

What the Platform Delivers

Build with LangGraph, Deep Agents, and AI-Q: The combined LangChain-NVIDIA stack enables developers to build agents at increasing levels of complexity. LangGraph provides a runtime for stateful multi-agent orchestration with complex control flows and human-in-the-loop patterns. Deep Agents, LangChain’s agent harness, goes further with built-in task planning, sub-agent spawning, long-term memory, and context management, enabling agents that run for minutes or hours across dozens of steps. Building on top of Deep Agents, NVIDIA AI-Q Blueprint is the flagship result of this collaboration: a full production enterprise deep research system that ranks #1 on deep research benchmarks. NeMo Agent Toolkit lets teams onboard existing LangGraph agents with minimal code changes and immediately access advanced profiling, evaluation, and MCP/A2A protocol support for composing multi-agent systems.

Accelerate LangGraph with NVIDIA: The LangChain NVIDIA software package provides NVIDIA-optimized execution strategies applied at compile time with no changes to node logic or graph edges. Parallel execution automatically identifies independent nodes and runs them concurrently, eliminating sequential bottlenecks. Speculative execution runs both branches of conditional edges simultaneously, discarding the wrong branch once the routing condition resolves. Together, these optimizations significantly reduce end-to-end latency for complex multi-step agent workflows.

Deploy with NVIDIA NIM: NIM microservices deliver up to 2.6x higher throughput compared to standard deployments across cloud, on-premise, and hybrid environments. Nemotron 3 Super’s MoE architecture enables cost-efficient deployment on a single GPU. NVIDIA NeMo Agent Toolkit adds production-readiness features including authentication, rate limiting, and a built-in UI for debugging deployed workflows. The toolkit’s GPU cluster sizing calculator lets teams profile their LangGraph workflows under load and forecast exact hardware requirements for scaling from a single user to thousands of concurrent sessions.

Monitor with LangSmith and NeMo Agent Toolkit: LangSmith, which has processed over 15 billion traces and 100 trillion tokens, provides application-level observability: distributed tracing, cost and latency monitoring, Insights Agent for automatically detecting usage patterns and failure modes on a recurring schedule, Polly for natural-language debugging and prompt engineering, and LangSmith CLI for working with trace data. The NeMo Agent Toolkit observability system natively exports telemetry to LangSmith, creating a unified view where infrastructure-level profiling (token usage, timing, throughput down to individual tokens) combines with LangSmith’s application-level tracing and AI-powered analysis in a single platform. To ensure enterprises have the right tools to embrace responsible AI practices, NVIDIA NeMo Guardrails integrates out of the box with LangChain, enabling teams to enforce content safety and policy compliance while customizing guardrails per use case.

Marketing Technology News: The ‘Demand Gen’ Delusion (And What To Do About It)

Evaluate across the Nemotron model family: LangSmith and NeMo Agent Toolkit together provide comprehensive evaluation across the full agent lifecycle. LangSmith supports offline evaluation (human review, LLM-as-judge, pairwise comparison, CI/CD integration via pytest/Vitest/GitHub workflows) and online evaluation including multi-turn evals that score entire conversation trajectories for task completion and decision quality. NeMo Agent Toolkit complements this with RAG-specific evaluators, agent trajectory analysis, and a hyper-parameter and prompt optimizer. These capabilities are especially powerful when applied across the Nemotron model family: teams can benchmark the same agent across Nemotron 3 Nano (30B/3B active), Super (~100B/10B active), and Ultra (~500B/50B active), measuring tradeoffs between accuracy, latency, and cost to right-size model selection per task, then use NeMo Agent Toolkit’s automatic reinforcement learning to fine-tune the chosen Nemotron model for their specific workflows.

Write in to psen@itechseries.com to learn more about our exclusive editorial packages and programs.

Retailers Are Missing Revenue by Getting Personalisation Wrong

0
Retailers Are Missing Revenue by Getting Personalisation Wrong

New Amperity research reveals real-time relevance drives conversion, but identity and execution gaps continue to hold brands back

Australian retailers are keen to capitalise on the power of personalisation, but execution remains challenging. Most acknowledge their capabilities are lagging, even as they recognise personalisation as critical to business success.

New research from Amperity, the leading customer data cloud for consumer brands, offers insights into what drives personalisation effectiveness. The study finds that real-time personalisation has become a direct revenue lever, influencing purchase behaviour and retention when retailers act on customer intent in the moment.

The 2026 State of Personalisation in Retail report, based on a survey of 1000 U.S. consumers, reveals that personalisation only delivers meaningful impact when it reflects live customer intent, not static profiles or delayed batch updates.

The findings are also relevant to Australian audiences, as this market grapples with similar strategic priorities around personalisation while facing execution challenges that prevent many retailers from delivering on customer expectations.

Key findings reveal how missed moments are costing retailers revenue

Real-time personalisation directly drives conversion:

  • 74% of consumers are more likely to purchase when they receive a truly personalised offer or recommendation

  • 69% are more likely to buy when retailers adjust offers instantly while they browse

High-intent moments are being missed:

  • 57% say shopping experiences still feel generic, despite retailers claiming to personalise

  • 79% report that retailers frequently get personalisation wrong, citing irrelevant or mistimed messages

Consumers expect recognition, but rarely get it:

  • 83% want retailers to remember them, including preferences and past purchases

The data shows a growing disconnect between what shoppers expect in moments like browsing, cart consideration, and email engagement, and what retailers actually deliver. When brands fail to act in these moments, they create friction and lose potential revenue.

Marketing Technology News: MarTech Interview With Fredrik Skantze, CEO and Co-founder of Funnel

More than half of consumers believe brands should personalise their experience in real-time rather than days later, and nearly one-third expect relevant offers to start from their very first interaction. Email remains the preferred channel for personalised outreach, placing even greater pressure on accuracy and timing.

AI is, of course, also expected to play a growing role in personalisation, and consumers favour a balanced approach. Nearly half want personalisation delivered through a combination of human associates and AI assistants, reinforcing the need for systems that blend automation with human judgement to deliver relevance and trust.

Australian market reflects similar challenges, with critical gaps in execution

While this global consumer research reveals the scale of the personalisation opportunity, Australian research that Amperity participated in last year with Arktic Fox shows local retailers face structural barriers to capitalising on it.

The Digital, Marketing & eComm in Focus 2025 report found that 88% of Australian retailers view personalisation as important or very important to their business, yet 57% of marketing leaders overall say their personalisation capability is lagging in the market. This capability gap persists despite 59% of brands experimenting with or scaling AI and GenAI to drive personalisation efforts.

The research revealed a critical disconnect in how retailers approach the foundation of personalisation. While more than half of all brands prioritise unifying customer data, only 25% consider identity resolution a key area of investment.

For Amperity Area Vice President and General Manager for Australia, Billy Loizou, this disconnect is exactly why personalisation continues to underdeliver.

“For companies generating more than a billion dollars in revenue, unifying customer data was ranked as the top priority. But identity resolution barely made the list,” he said.

Marketing Technology News: The Death of Third-Party Cookies Was Just the Start. Are You Ready for Consent Orchestration?

“That’s a real concern. You can’t talk about a unified customer view if you don’t know with certainty who the customer actually is. Identity resolution is what turns fragmented data into something usable. Without it, personalisation is guesswork and AI simply scales the noise.

“If retailers want real-time relevance that drives conversion and loyalty, they need to invest in the foundation first. Otherwise, they’re building advanced capabilities on unstable ground.”

The challenge is compounded by resource constraints. Marketing and digital budgets for Australian retailers have remained the same or declined over the past 12 months for 78% of brands, while 65% cite balancing short-and-long-term priorities as their biggest challenge.

Despite the focus on AI for personalisation, only 17% of Australian retailer marketing and digital leaders believe they are effectively leveraging AI to optimise digital content creation processes.

“With budgets under pressure, retailers can’t afford to invest in capabilities that don’t convert,” Loizou said.

“The global findings reinforce what we’re seeing locally. Real-time personalisation drives revenue, but only when the identity foundation is solid. The brands that get this right will grow. The ones that don’t will keep wondering why their AI investments aren’t paying off.”

Download the 2026 State of Personalisation in Retail report to explore how real-time, data-driven personalisation affects purchasing decisions, loyalty, and customer trust, and what retailers must do to close the execution gap in 2026 and beyond.

Amperity’s Customer Data Cloud empowers brands to transform raw customer data into strategic business assets with unprecedented speed and accuracy. Through AI-powered identity resolution, customizable data models, and intelligent automation, Amperity helps technologists eliminate data bottlenecks and accelerate business impact. More than 400 leading brands worldwide, including Alaska Airlines, DICK’S Sporting Goods, BECU, Virgin Atlantic, and Wyndham Hotels & Resorts, rely on Amperity to drive customer insights and revenue growth. Founded in 2016, Amperity operates globally with offices in Seattle, New York City, London, and Melbourne.

Write in to psen@itechseries.com to learn more about our exclusive editorial packages and programs.

MarTech Interview with Sherry Smith, President of Retail Media @ Criteo

0
MarTech Interview with Sherry Smith, President of Retail Media @ Criteo

Sherry Smith, President of Retail Media at Criteo shares more on how marketers today can drive better results with agentic AI powered experiences:

___________

Hi Sherry, tell us about yourself and your role at Criteo.

My career has grown alongside retail media itself. I was part of the early days of the industry, helping build some of the first retail media programs with Walmart and later leading Triad Retail Media. Back then, we were proving that retailers could turn their first-party data and shopper relationships into a powerful growth engine for brands.

Over the past two decades, I’ve seen retail media evolve from a nascent idea into a core pillar of modern commerce. As President of Retail Media at Criteo, I focus on helping retailers and brands scale that opportunity globally and build for the future of commerce, where retail media plays a central role in driving growth, loyalty, and measurable results across every touchpoint.

How is Retail Media shaping up today, and what top trends will define the market through 2026?

Retail media is entering its next phase of maturity. Over the past decade, growth has been fueled by sponsored search, onsite display, offsite media activation, and marketplace advertising. But as commerce becomes more connected and responsive to shopper behavior, discovery is evolving beyond simple keyword search toward more intuitive, personalized experiences.

Looking toward 2026, I see three major shifts shaping the market.

First, retail media will become more seamlessly embedded across digital touchpoints. This will support richer product discovery experiences while preserving retailer control over inventory, pricing, and shopper relationships.

Second, we’ll see the emergence of new, more native ad formats that feel less like traditional ads and more like helpful recommendations, creating incremental opportunities for brands rather than simply reallocating existing spend.

Third, advanced automation and optimization will become essential. As digital shelf space becomes more competitive, retailers will rely on sophisticated decisioning systems to balance sponsored and organic results, maximize performance, and protect the customer experience.

For brands resetting their agentic commerce workflows and experience: what top tips would you share with them?

As agentic commerce evolves, brands should start by recognizing that discovery is becoming more conversational and context-driven, but it is still anchored in retailer environments. AI-driven experiences rely heavily on structured product data, clear attributes, and strong content signals. Brands that invest in making their product information accurate, differentiated, and easy to interpret will be better positioned as recommendations become more dynamic.

Marketing Technology News: MarTech Interview With Fredrik Skantze, CEO and Co-founder of Funnel

Can you talk about a few brands from around the world who you’ve seen build unique retail experiences with agentic AI?

While retailers ultimately own and operate the commerce experience, we’re seeing innovative brands lean into these new environments in thoughtful ways.

In markets like the U.S., as retailers introduce more guided or conversational shopping features, leading brands are investing in richer product content, enhanced attributes, and contextual storytelling that help their products surface naturally within those experiences.

Globally, the brands that stand out aren’t necessarily building standalone AI experiences themselves. Instead, they’re partnering closely with both retailers and emerging players like LLMs to support incremental discovery and ensure their brand is presented in these new shopping environments.

Five thoughts on the future of retail from your perspective?

First, retailers remain central to commerce because they control the fundamentals: trust, pricing, loyalty, fulfillment, and customer relationships. Technology will continue to evolve, but those assets are enduring competitive advantages.

Second, discovery will continue to diversify. Consumers will move fluidly across retailer sites, marketplaces, social platforms, and emerging interfaces depending on need and context. Winning retailers will meet shoppers wherever they are while maintaining a consistent, trusted experience.

Third, trust will become an even more powerful economic driver. As commerce grows more personalized and automated, transparency and reliability will directly influence conversion, loyalty, and long-term brand value.

Fourth, digital shelf space will become more strategic. As assortments expand and attention becomes scarcer, retailers and brands will need smarter merchandising, better data, and more sophisticated optimization to ensure relevance and performance.

Finally, retail media will solidify its role as a foundational revenue engine. When integrated thoughtfully into the commerce experience, it strengthens partnerships with brands and supports sustainable, incremental growth.

Top of mind best practices for brands looking to optimize their retail media outlook and output in 2026.

Brands need to think beyond campaigns and focus on impact. In 2026, the winners will be those who align retail media investment with merchandising strategy, category growth, and customer lifetime value — not just short-term ROAS.

They should also move early on emerging formats and experiences, but with discipline. Testing is critical, yet every activation should be measured against incrementality and long-term brand equity.

Most importantly, retail media performance will hinge on partnership. The brands that treat retailers as strategic growth collaborators, rather than media channels, will unlock the greatest value.

Marketing Technology News: The Death of Third-Party Cookies Was Just the Start. Are You Ready for Consent Orchestration?

Criteo

Criteo is the global commerce media company that enables marketers and media owners to drive better commerce outcomes.

About Sherry Smith

Sherry Smith is President of Retail Media at Criteo

Gartner Marketing Survey Finds 50% of Consumers Prefer Brands That Avoid Using GenAI in Consumer-Facing Content

0
Dotgo Launches AI-Powered Content Filtering System for RCS

As Consumers Increasingly Question What’s “Real,” Marketers Should Make AI Transparent, Optional and Clearly Beneficial

Half of U.S. consumers (50%) say they would prefer to give their business to brands that do not use GenAI, according to a survey by Gartner, Inc., a business and technology insights company. In this survey, “brands that use GenAI” refers to brands incorporating GenAI in consumer-facing messages, advertising and content.

The finding suggests that growing consumer use of GenAI does not automatically translate into comfort with AI-powered brand experiences.

A Gartner survey of 1,539 U.S. consumers conducted in October 2025 revealed that consumer skepticism is also intensifying more broadly, creating a high-risk environment for synthetic or unsubstantiated brand claims. Sixty-one percent (61%) of consumers say they frequently question whether the information they use to make everyday decisions is reliable, and 68% frequently wonder whether the content and information they see is real.

Marketing Technology News: MarTech Interview With Fredrik Skantze, CEO and Co-founder of Funnel

“Marketers should treat GenAI as a trust decision as much as a technology decision,” said Emily Weiss, Senior Principal Analyst in the Gartner Marketing practice. “Consumers are questioning what’s real and making efforts to verify more of what they see. The brands that win will be the ones that use AI in ways customers can immediately recognize as helpful, while being transparent about when AI is used, what it’s doing, and giving customers a clear choice to opt out.”

Marketing Technology News: The Death of Third-Party Cookies Was Just the Start. Are You Ready for Consent Orchestration?

Consumers’ heightened skepticism is also changing how they assess truth. By the end of 2025, only 27% of consumers said they determine whether information is true using intuition, reflecting a growing shift toward independent checking, and verification behaviors.

“To reduce risk and build trust, marketers should make GenAI optional rather than mandatory, start with clearly assistive use cases that deliver immediate customer value, and label AI-driven experiences, so people understand when and how AI is being used,” advised Weiss. “Marketers should also make verification easy by backing claims with clear proof points and governance, because consumers are increasingly skeptical about what they see and hear. When AI is transparent, helpful, and in the customer’s control, it can strengthen the experience instead of weakening trust.”

Write in to psen@itechseries.com to learn more about our exclusive editorial packages and programs.

Dialpad Names Brett Reed as Chief Financial Officer to Lead Next Era of Global Enterprise Scale

0
Dialpad Names Brett Reed as Chief Financial Officer to Lead Next Era of Global Enterprise Scale

Dialpad Logo

Dialpad, the AI-first communications and agentic platform defining the next era of business conversations and actions, announced the appointment of its interim finance leader, Brett Reed, as Chief Financial Officer. The move formalizes Dialpad’s finance leadership as the company scales to meet growing demand from the world’s largest enterprise organizations.

Marketing Technology News: MarTech Interview With Fredrik Skantze, CEO and Co-founder of Funnel

“The enterprise market is moving past the experimental phase of AI and into a phase of rigorous execution.” Craig Walker, CEO and Co-founder of Dialpad

“The enterprise market is moving past the experimental phase of AI and into a phase of rigorous execution,” said Craig Walker, CEO and Co-founder of Dialpad. “Brett is a high-impact leader who has a strong reputation with the investment community and knows exactly what it takes to scale a world-class organization. By naming him CFO, we’re ensuring Dialpad has the financial discipline and the ‘A-team’ in place to get after the massive opportunity in Agentic AI for our customers.”

Marketing Technology News: The Death of Third-Party Cookies Was Just the Start. Are You Ready for Consent Orchestration?

With more than 20 years of financial management experience, including Salesforce and Vlocity, Reed brings the strategic rigor and pattern recognition required for Dialpad’s continued expansion into the public-scale enterprise market. His leadership ensures that as Dialpad’s customers transition to autonomous AI solutions, they are supported by a partner built for long-term scale and enterprise-grade performance.

“Dialpad has built the most complete and AI-native platform the way business actually works – and with our leading agentic capabilities, we are defining how businesses will work in the future,” said Brett Reed, CFO. “I am thrilled to take on this role as we streamline our operations to deliver the performance, security, and measurable value that our enterprise customers demand.”

Write in to psen@itechseries.com to learn more about our exclusive editorial packages and programs.

Adobe and NVIDIA Announce Strategic Partnership to Deliver the Next Generation of Firefly Models and Creative, Marketing and Agentic Workflows

0
CloudWave rebrands to NeonNow as it launches partner-led AI CX platform across 170 markets

Adobe, Inc.

  • Adobe will use NVIDIA’s advanced computing technology and libraries to deliver the next generation of foundational Adobe Firefly models that offer best-in-class precision and control for creativity and marketing pipelines.

  • Adobe and NVIDIA will collaborate to deliver breakthrough agentic creative and marketing workflows for content, campaign and production speed.

  • Adobe will build a cloud-native, brand identity-preserving 3D digital twin solution purpose-built for marketing, built on NVIDIA Omniverse libraries.

  • Adobe Firefly Foundry to integrate NVIDIA’s advanced computing and AI technologies to power enterprise‑grade custom AI that delivers commercially safe content at scale.

Adobe and NVIDIA announced a strategic partnership to accelerate AI-powered creation, production and personalization, including delivering the next generation of foundational Adobe Firefly models and agentic workflows.

The partnership will bring together Adobe’s creative and marketing workflows, models and technology and NVIDIA’s open models, libraries, research and accelerated computing, as demand for content continues to surge and generative AI reshapes creative and marketing workflows.

Through this partnership, Adobe and NVIDIA will advance the creative industry by developing next-generation Firefly models that will deliver best-in-class creative precision and control for creativity and marketing pipelines. The models will be built on NVIDIA’s advanced computing technology and tap into NVIDIA CUDA-X™, NVIDIA NeMo™ libraries, NVIDIA Cosmos™ open models and NVIDIA Agent Toolkit software to enable the interactive, high-quality creation customers expect.

“Content creation is exploding, and our partnership with NVIDIA is grounded in a shared vision to reinvent creative and marketing workflows with the power of AI,” said Shantanu Narayen, chair and CEO, Adobe. “As AI transforms how marketing teams and media and entertainment studios work, Adobe and NVIDIA will bring together our Firefly models, CUDA libraries into our applications, 3D digital twins for marketing, and Agent Toolkit and Nemotron to our agentic frameworks to deliver high-quality, controllable and enterprise-grade AI workflows of the future.”

“AI is giving every industry the ability to redefine what’s possible,” said Jensen Huang, founder and CEO, NVIDIA. “For more than 20 years, NVIDIA and Adobe have partnered to push the boundaries of design and creativity. Today, we are taking that partnership to a new level — uniting our research and engineering teams to accelerate Adobe’s beloved applications with NVIDIA CUDA and jointly build state-of-the-art world foundation models that reimagine creativity and transform customer experiences.”

Adobe and NVIDIA will collaborate to deliver breakthrough agentic creative and marketing workflows for content, campaign and production speed. Adobe will explore NVIDIA Agent Toolkit software and NVIDIA Nemotron™ open models to power these agentic workflows.

Adobe and NVIDIA will also work together on NVIDIA NemoClaw — an open source stack that simplifies running OpenClaw always-on assistants more safely, with a single command. As part of the NVIDIA Agent Toolkit, it installs the NVIDIA OpenShell runtime — a secure environment for running autonomous agents and open source models like NVIDIA Nemotron.

Marketing Technology News: MarTech Interview With Fredrik Skantze, CEO and Co-founder of Funnel

In partnership with NVIDIA, Adobe is launching a cloud-native, brand identity-preserving 3D digital twin solution (public beta). The solution creates virtual replicas of physical products that act as permanent digital identities for marketing and commerce experiences. Integrating NVIDIA Omniverse™ libraries into Adobe technologies, the collaboration expands support for 3D digital twin workflows built on OpenUSD for marketing content automation.

With seamless interoperability across tools, brands can generate everything from consistent pack shots and lifestyle imagery to configurable 3D product experiences and immersive virtual try-ons.

Adobe will also harness NVIDIA AI infrastructure, AI libraries, services and models to accelerate and optimize every layer of its AI-powered tools across creativity, productivity and customer experience orchestration — including Adobe Acrobat, Photoshop, Premiere Pro, Frame.io, Adobe Firefly Foundry, Adobe GenStudio and Adobe Experience Platform.

With Adobe Firefly Foundry, Firefly’s commercially safe AI models are deeply tuned with a company or IP owner’s unique, proprietary brand or franchise content, which is critical for media and entertainment studios. Adobe Firefly Foundry will integrate NVIDIA’s advanced computing and AI technologies to power enterprise‑grade custom AI that delivers commercially safe content at scale.

Marketing Technology News: The Death of Third-Party Cookies Was Just the Start. Are You Ready for Consent Orchestration?

Key areas of the strategic partnership include:

  • Deliver the next generation of Adobe Firefly models: Adobe will use NVIDIA’s advanced computing and AI technologies to deliver the next generation of Adobe Firefly models designed for creativity, productivity and marketing. The models will be built on NVIDIA’s advanced computing technology, and NVIDIA CUDA-X® and NeMo libraries.
  • Adobe agentic AI innovation: Adobe will explore NVIDIA OpenShell™ and Nemotron — part of NVIDIA Agent Toolkit — as foundations for hybrid, long-running agentic loops in a personalized, secure and cost-efficient environment. Adobe will also evaluate Agent Toolkit and Nemotron for large-scale agentic workflows powered by Adobe Experience Platform. NVIDIA will provide engineering expertise, early access to software and targeted go-to-market support.
  • Transform marketing content creation with high-fidelity, cloud-native 3D digital twins: By unifying NVIDIA accelerated computing, NVIDIA Omniverse libraries for OpenUSD universal data interchange, NVIDIA RTX™ rendering and NVIDIA Omniverse Kit App Streaming for real-time cloud streaming with Adobe’s generative AI and workflow platforms, Adobe’s solution will produce cloud-native, brand-identity-preserving, 3D digital twins for marketing content automation.
  • Unlock deep-tuned creative possibilities for enterprises at scale: Using NVIDIA accelerated computing, CUDA-X acceleration libraries and open models, Adobe will deliver faster, higher-performing and more flexible proprietary and IP-protected Firefly Foundry models across image, video, audio, vector and 3D to brands and franchises.
  • Advance business productivity with AI document intelligence: Adobe Acrobat is the productivity and collaboration platform to help customers get their best work done. Adobe will bring NVIDIA Nemotron capabilities to Adobe Acrobat to further elevate the quality of AI output and increase productivity for business professionals, consumers and enterprises.
  • Accelerate creative workflows in the cloud: Frame.io is Adobe’s single platform to centralize content, people and feedback across the creative process to accelerate quality output. Adobe will accelerate Frame.io’s scalable cloud content management and workflows, media decoding and intelligence with NVIDIA CUDA® — powering fast semantic search, generative creation and insights across image, video, 3D and other creative media types at scale.
  • Develop joint go-to-market strategy: Adobe and NVIDIA will develop a joint go-to-market strategy that drives access and adoption of these AI innovations by enterprise customers worldwide with Adobe Firefly Foundry.

Write in to psen@itechseries.com to learn more about our exclusive editorial packages and programs.

Starburst Announces Day-One Support for Delivering Unmatched AI Inference and Analytics Performance with NVIDIA Vera CPU

0
Starburst Announces Day-One Support for Delivering Unmatched AI Inference and Analytics Performance with NVIDIA Vera CPU

Starburst Logo

Starburst becomes the first open hybrid lakehouse platform optimized for NVIDIA’s new inference compute platform, enabling customers to run real-time AI and analytics on governed, federated data at unprecedented speed

Starburst, a leader in data and AI platforms, announced optimizations for NVIDIA Vera CPU, unveiled at NVIDIA GTC. Starburst customers will gain access to breakthrough query performance, lower-latency AI inference, and significant cost efficiencies from day one of Vera availability later in 2026.

Starburst’s Trino-powered platform is uniquely positioned to capitalize on Vera, NVIDIA’s next-generation data center CPU designed for fast agentic reasoning and data analytics. While competing platforms require data to be centralized into proprietary warehouses before it can power AI, Starburst delivers hybrid, federated, and governed data access at inference speed directly where the data lives, across lakes, warehouses, and operational systems, without movement or duplication.

As enterprise demand shifts from model training to production inference powering agentic AI, retrieval-augmented generation (RAG), and real-time applications, NVIDIA Vera addresses the performance and cost constraints that have limited AI adoption at scale.

“The future of enterprise AI depends on fast access to governed data,” said Justin Borgman, Founder and CEO of Starburst. “With NVIDIA Vera, Starburst aims to bring real-time analytics and inference directly to where that data lives.”

Marketing Technology News: MarTech Interview With Fredrik Skantze, CEO and Co-founder of Funnel

Benefits for Starburst Customers

Early benchmark testing shows Starburst on NVIDIA Vera CPU delivering materially faster query performance and significantly higher CPU efficiency than comparable traditional x86 CPU–based configurations, while maintaining predictable performance for mixed BI and AI inference workloads. For customers, this translates into four core advantages:

  • Faster time to AI value. Live, governed enterprise data feeds inference workloads, RAG pipelines, and agentic AI in real time, eliminating the ETL bottleneck that delays AI projects.
  • Lower total cost of ownership. Vera’s efficient architecture, paired with Starburst’s federated model, which avoids costly data duplication, compounds savings across compute and storage.
  • Predictable performance at scale. Deterministic throughput for concurrent analytics and AI workloads, even across complex, multi-source datasets.
  • Built-in enterprise governance. Fine-grained access controls and policy enforcement travel with every query, ensuring AI applications consume only authorized, compliant data as organizations scale into regulated environments.

“The future of enterprise AI relies on securing instant insights from data that is often distributed across complex hybrid environments,” said Dion Harris, senior director, HPC, Cloud and AI Infrastructure, NVIDIA. “Our work with Starburst, optimizing their Trino-powered platform for NVIDIA Vera CPU, will deliver a foundational solution for real-time, federated data processing.”

Marketing Technology News: The Death of Third-Party Cookies Was Just the Start. Are You Ready for Consent Orchestration?

“With Trino running on NVIDIA Vera CPU, we’re unlocking a new level of performance and efficiency for federated data access, enabling customers to run analytics and query data directly where it lives – the result is faster insights, lower infrastructure overhead, and a platform built to scale with growing enterprise needs,“ said Jitender Aswani SVP of Engineering, Starburst.

Competitive Differentiation

Starburst creates a position that neither proprietary cloud warehouses nor legacy platforms can replicate. Closed architectures require enterprises to copy data into a single system before AI can use it, adding cost, latency, and governance gaps. Legacy Hadoop and Spark ecosystems lack a native inference path entirely. Starburst eliminates both limitations with an open, hybrid, and federated architecture built on Trino, now optimized for the most advanced inference compute available.

Ecosystem Integration

Starburst’s optimization extends across validated enterprise architectures, including the Dell AI Factory with NVIDIA and the Dell AI Data Platform, where Starburst serves as the analytics, data access, and governance engine. Customers deploying Vera within these configurations gain a fully integrated, inference-ready stack from infrastructure to insight.

Building on GPU-Accelerated Trino

In addition to support for Vera CPU, Starburst is developing GPU acceleration for Trino using NVIDIA CUDA, and NVIDIA cuDF for structured data. This future capability will bring large-scale parallelism and accelerated columnar processing to federated analytics and AI inference, powering the next generation of retrieval-augmented and agentic AI workloads.

The initiative reflects Starburst’s vision to unify compute across CPUs and GPUs within a single open engine–enabling enterprises to harness governed, high-performance data processing at the speed of innovation.

Write in to psen@itechseries.com to learn more about our exclusive editorial packages and programs.

Netris Announces Integration and Support for NVIDIA DSX Air

0
Netris Announces Integration and Support for NVIDIA DSX Air

Netris Logo

Integration brings Netris network automation and multi-tenancy into NVIDIA DSX Air, enabling customers and partners to develop, integrate, and validate the full ecosystem ahead of deployment

Netris, the leading provider of network automation and multi-tenancy for AI infrastructure, announced integration and support for NVIDIA DSX Air, the infrastructure simulation platform announced by NVIDIA at GTC 2026.

“Netris integration with NVIDIA DSX Air provides a powerful environment to architect and validate the entire networking stack in parallel with hardware delivery, drastically accelerating time to value.” – Amit Katz, VP of Networking at NVIDIA

NVIDIA DSX Air enables teams to simulate large-scale AI infrastructure at full fidelity, including network orchestration, automation, and multi-tenancy; Kubernetes orchestration; storage, security, observability, and other ecosystem partner solutions. The platform runs at scale, allowing teams to model complex, multi-tenant AI environments that mirror real-world deployments.

Netris deploys on NVIDIA DSX Air as the network orchestration, automation, and multi-tenancy layer across the entire AI networking stack: Ethernet (including NVIDIA Spectrum-X Ethernet), NVL72, NVIDIA BlueField DPUs, and Virtual and Edge Networking. Netris enables operators to enforce hard multi-tenancy with hardware-level isolation, provision tenants with instant network isolation, and dynamically reallocate GPU capacity across tenants. All of this can now be developed, tested, and validated before hardware is delivered.

The integration prepares teams across the full deployment lifecycle from ISV integration and customer evaluation, through pre-deployment validation (day 0), to accelerated go-live (day 1) and ongoing change management (day 2), accelerating time to monetization and reducing deployment risk.

Why Simulation Matters

AI is driving the largest infrastructure deployment the world has ever seen. Thousands of organizations are designing, ordering, and commissioning GPU clusters simultaneously. These deployments require multiple technology components, such as network orchestration, automation, multi-tenancy, Kubernetes orchestration, storage, security, observability, and other ecosystem partner solutions that all need to be integrated with NVIDIA hardware and software and with each other.

The traditional technology rollout lifecycle of demo, trial, validation, and pre-deployment can no longer depend on physical labs. Labs are too limited in scale and typically booked, with long queues of AI operators waiting. At the same time, operators who have already ordered hardware and are waiting for delivery cannot spend precious time learning on their own equipment. They need to evaluate and validate the technology before the hardware arrives, so once the expensive hardware is there, it goes live immediately.

Netris in the AI Infrastructure Stack

Modern AI infrastructure is not a single network. It consists of multiple independent networking fabrics operating simultaneously: North-South Ethernet fabrics with and without DPUs, East-West back-end fabrics such as NVIDIA Spectrum-X Ethernet for GPU-to-GPU communication, and rack-scale NVLink interconnect domains. Each uses distinct technologies and control mechanisms, yet they must be orchestrated, validated, and operated as a cohesive system. Netris provides this orchestration.

Netris is a critical component of this stack, providing network automation, abstraction, and multi-tenancy across all of these independent traffic fabrics. Netris deploys as a complete system inside DSX Air. The same Netris controller that runs inside DSX Air moves directly into the live environment at deployment. If discrepancies exist between the digital twin and physical hardware — wrong cabling, topology mismatches — Netris immediately detects them and guides engineers on the ground to fix the issues.

Netris is the leading provider of network automation and multi-tenancy for AI infrastructure. The Netris NAAM (Network Automation, Abstraction, and Multi-Tenancy) platform is purpose-built for GPU clouds and AI Factories, trusted by high-growth neoclouds, sovereign AI cloud providers, and leading system integrators worldwide. Netris captured 12 percent of neoclouds worldwide in the last 10 months, with every deployment live across 20+ data centers.

How Customers and Partners Use DSX Air Across the Deployment Lifecycle

ISV Integration. AI deployments require multiple technology components that need integrations with NVIDIA and with each other. NVIDIA and Netris use DSX Air with ecosystem partners to develop and validate these integrations ahead of customer deployments.

Marketing Technology News: MarTech Interview With Fredrik Skantze, CEO and Co-founder of Funnel

Evaluation and Proof of Concept. AI operators evaluating solutions want to see demos, deep-dive into configurations, and kick the tires. Assembling physical hardware and tens of thousands of cables for an evaluation is not practical. DSX Air simulates realistic environments in minutes. ISVs and evaluating customer engineers collaborate inside DSX Sim instead of waiting months for physical lab availability.

Training Before Hardware Delivery. Once an operator signs a deal for hardware and software, they use DSX Air to stand up a simulation of their upcoming AI infrastructure running the exact anticipated software stack. Teams learn the system, ask questions to vendors ahead of time, and discover issues before hardware arrives, so once the mission-critical hardware is delivered, they go live and start making money sooner.

Pre-Deployment (Day 0). Complex AI infrastructure is deployed by skilled implementation teams that work together. These teams collaborate and validate anticipated configurations before hardware delivery. An exact, detailed digital twin is deployed in DSX Air, and implementation teams collaborate on a realistic simulation to identify issues in advance. Once hardware is delivered and powered on, the Netris controller moves from DSX Air into the live environment.

Marketing Technology News: The Death of Third-Party Cookies Was Just the Start. Are You Ready for Consent Orchestration?

Go-Live (Day 1). Equipment powers on and teams execute with pre-validated configurations. Bring-up cycles compress because the full ecosystem has already been exercised together in simulation.

Ongoing Operations (Day 2). AI operators are live and making money. However, there are always day-2 changes — software upgrades, topology extensions, tenant policy changes, new ideas to experiment with. Instead of testing on live hardware, which should be running paid customer workloads, operators use a digital twin of their infrastructure in DSX Air to rehearse changes before applying them to the live environment.

“When mission-critical GPU hardware arrives, it should go live ASAP and not sit idle while engineers figure things out,” said Alex Saroyan, CEO of Netris. “DSX Air allows our customers and partners to develop, integrate, and validate the complete ecosystem before systems are on site. With Netris, operators can instantly provision tenants, reallocate GPU capacity, and scale their business from day one, with confidence that the entire stack works in concert.”

“The race to build AI factories requires a shift from manual configuration to a fully simulated, automated lifecycle that ensures the network is ready the moment GPUs are powered on,” said Amit Katz, VP of Networking at NVIDIA. “Netris integration with NVIDIA DSX Air provides a powerful environment to architect and validate the entire networking stack in parallel with hardware delivery, drastically accelerating time to value.”

Write in to psen@itechseries.com to learn more about our exclusive editorial packages and programs.

Aurora Mobile Establishes Japanese Subsidiary to Redefine AI-Powered Customer Interaction and Security

0
Aurora Mobile Establishes Japanese Subsidiary to Redefine AI-Powered Customer Interaction and Security

Aurora Mobile Limited, a leading provider of customer engagement and marketing technology services, announced the official establishment of its Japanese subsidiary, Aurora Mobile Japan K.K.

This strategic expansion brings the Company’s flagship platform, EngageLab, directly to the Japanese market. By establishing a local presence, Aurora Mobile Japan K.K. is poised to empower enterprises with a Full-Journey AI Engagement ecosystem that seamlessly integrates Omnichannel Interaction, Customer Service AI Agent, and Frictionless Security.

Optimizing Global Operations with Unified Intelligence
As Japanese enterprises continue to strengthen their global influence, the need for efficiency and ROI in customer lifecycle management has never been higher. Aurora Mobile Japan K.K. enters the market to solve the fragmentation of customer data and channels, offering a unified solution that drives exponential growth.

“Japan is a market that demands precision, stability, and innovation. We are honored to bring our globally proven infrastructure here,” said Mr. Weidong Luo, Chairman and Chief Executive Officer of Aurora Mobile. “With Aurora Mobile Japan K.K., we are delivering a clear advantage: the ability to connect smarter and convert faster. Whether it is through invisible security verification or AI agents that slash operational costs, we provide the digital infrastructure that Japanese businesses need to maximize their global impact.”

Marketing Technology News: MarTech Interview With Fredrik Skantze, CEO and Co-founder of Funnel

Three Pillars of the Aurora Advantage
Aurora Mobile Japan K.K. introduces a modular solution suite designed to optimize customer experience with maximum ROI:
1. Smart Omnichannel Interaction & Marketing Automation
Moving beyond simple delivery, EngageLab transforms passive communication into conversational selling. The platform seamlessly integrates AppPush, WebPush, Email, SMS, and WhatsApp into a unified system capable of delivering billions of notifications in milliseconds. Specifically, our proprietary channel optimization technology for AppPush achieves a 1.4X higher delivery success rate compared to industry standards.

  • Visual Journey Builder: Marketers can use drag-and-drop tools to reach customers through optimal channels at minimal cost.
  • Smart Algorithms: Features like peak-time delivery for Email and proprietary channel optimization ensure that every interaction counts.

2. Advanced AI Customer Support
Addressing the critical labor shortage and efficiency challenges in Japan, the Company introduces LiveDesk, a solution that breaks free from traditional seat limits.

  • AI-Human Collaboration: Advanced AI Agents independently handle up to 90% of customer inquiries, slashing operational costs by 70% while elevating service quality. The system enables seamless handoff between AI and human agents, ensuring precise and effective resolution of complex customer inquiries.

3. Frictionless Security & Identity
Security is the foundation of digital trust. Aurora Mobile Japan K.K. offers a suite of AI-powered protection tools that users do not hate:

  • Effortless Verification: The platform features Silent Auth, which enables instant, zero-friction user validation. This technology streamlines the user access process, granting genuine users instant, uninterrupted access to services.
  • Intelligent Defense: This is complemented by OTP services with a 99% success rate and AI-Powered CAPTCHA, which instantly blocks bots while allowing real users to sail through with minimum friction.

Local Team & Office
Aurora Mobile Japan K.K. is fully operational with a dedicated local team to provide tailored consulting and technical support to Japanese clients.

Marketing Technology News: The Death of Third-Party Cookies Was Just the Start. Are You Ready for Consent Orchestration?

Write in to psen@itechseries.com to learn more about our exclusive editorial packages and programs.

ZoomInfo Leads in Evaluation of B2B Marketing and Sales Data Providers for B2B

0
Multiply raises $9.5m for self-learning ads, reports 300%-500% pipeline increase for B2B companies

ZoomInfo Logo

Key Findings:

  • Highest Current Offering Category Score & Highest Scores in 20 of 27 Criteria

  • ZoomInfo is a top option for organizations seeking a full ecosystem provider

  • GTM knowledge graph for data discovery and agentic AI use cases noteworthy

ZoomInfo , the Go-To-Market (GTM) Intelligence Platform, announced it has been named a Leader in The Forrester Wave™: Marketing and Sales Data Providers for B2B, Q1 2026. Forrester’s report noted that ZoomInfo is “setting a technology standard for data collection and identity resolution” and that “the ongoing development of a GTM knowledge graph to support data discovery and agentic AI use cases is noteworthy…” The report’s vendor profile for ZoomInfo concluded: “ZoomInfo is a top option for organizations seeking a full ecosystem provider across sales and marketing.”

Marketing Technology News: MarTech Interview With Fredrik Skantze, CEO and Co-founder of Funnel

“Forrester’s report has confirmed what our customers experience every day: we are the data backbone that modern GTM runs on,” said Henry Schuck, CEO and Founder of ZoomInfo.

ZoomInfo received:

  • Highest current offering category score among all evaluated vendors.
  • Highest possible scores in 20 of 27 criteria including data foundation, platform and ecosystem.
  • Highest possible scores across 4 criteria evaluated within the strategy category: Vision, Innovation, Partner Ecosystem, and Supporting Services and Offerings.

Marketing Technology News: The Death of Third-Party Cookies Was Just the Start. Are You Ready for Consent Orchestration?

According to The Forrester Wave™: Marketing and Sales Data Providers for B2B, Q1 2026, ZoomInfo “has entrenched itself as the default data provider for B2B sales … with broader ambitions spanning all go-to-market functions…” The report further states that “ZoomInfo’s vision and innovation stand out, highlighted by first-to-market genAI capabilities for data capture and insight generation and backed by the largest R&D budget among those willing to disclose that investment.” ZoomInfo also sees its strategy category score as a reflection of the company’s ongoing investment in its GTM knowledge graph — an infrastructure layer designed to support data discovery and agentic AI use cases

“In an era where AI promises everything but delivers only as much as the data beneath it, ZoomInfo clearly set itself far apart from the competition; to us, Forrester’s report has confirmed what our customers experience every day: we are the data backbone that modern GTM runs on,” said Henry Schuck, CEO and Founder of ZoomInfo. “We spend nearly $200 million a year on innovation and our data foundation. We are not just keeping pace with where GTM is headed, we believe this recognition confirms that we are setting the standard for it.”

Write in to psen@itechseries.com to learn more about our exclusive editorial packages and programs.

digna 2026.01 Expands Enterprise Data Validation Inside the Database

0
Data Intelligence with High-Performance Data Access & Orchestration for Fastest Time to Value of AI Production

digna logo

New release adds multi-column uniqueness and referential integrity checks, enabling enterprise data validation directly inside source databases.

digna has announced Release 2026.01 of its data quality and observability platform, introducing expanded data validation capabilities designed to operate directly within enterprise databases without requiring data extraction or replication.

Our approach moves the validation logic to where the data already lives. This allows teams to enforce sophisticated data quality rules without introducing additional data movement.”

— Marcin Chudeusz

The new release introduces advanced validation features including multi-column uniqueness checks and referential integrity validation, enabling organizations to enforce complex structural and relational data quality rules directly where data is stored.

As enterprise data environments continue to scale across warehouses, lakes, and operational systems, organizations increasingly face challenges ensuring that data remains consistent and trustworthy without adding additional processing layers or moving large volumes of data outside their infrastructure.

Marketing Technology News: MarTech Interview with Christopher P Willis, CMO at Acrolinx

The latest update to digna addresses this challenge by expanding its in-database validation architecture, allowing validation logic to run within the source database through SQL-based inspections rather than exporting datasets into external data quality engines.

According to the company, this approach reduces operational complexity while helping organizations maintain stronger control over sensitive enterprise data.

“Many traditional data quality tools require exporting large datasets before checks can be performed,” said Marcin Chudeusz, CEO of digna. “Our approach moves the validation logic to where the data already lives. This allows teams to enforce sophisticated data quality rules without introducing additional data movement.”

Expanded Validation Capabilities

Release 2026.01 introduces several enhancements to the digna Data Validation module, expanding the types of data integrity rules organizations can enforce across their environments.

One of the key additions is multi-column uniqueness validation, which allows teams to verify compound business keys across datasets. Many real-world business entities rely on combinations of attributes—such as account identifiers, product codes, or timestamps—to define uniqueness. Traditional single-column checks cannot detect duplicates within these compound relationships.

Marketing Technology News: Future of Retail Martech Part 1: Personalisation and AI

The new functionality enables validation of configurable column sets, helping identify duplicate business entities that may otherwise remain undetected in complex analytical systems.

The release also introduces referential integrity checks designed to validate relationships between datasets. These checks ensure that foreign key values in one datasource exist within a referenced datasource, helping detect orphaned records and broken relationships that can undermine downstream analytics and reporting.

The integrity checks support validation across multiple database environments, including different schemas, tables, views, or database connections within the same project.

These validation mechanisms are intended to support common enterprise scenarios such as:

maintaining data warehouse integrity

validating master data relationships

supporting regulatory reporting

improving reliability of downstream analytics and BI systems

Validation Without Data Movement

A distinguishing aspect of the platform’s architecture is that validation runs directly within the source database.

Instead of extracting data into external processing environments, digna executes SQL-based inspections through database interfaces and evaluates the resulting metrics externally. This design allows organizations to monitor data quality without copying datasets or creating additional storage layers.

The company states that this approach is particularly relevant for enterprises operating in regulated sectors where data residency, governance, and operational control are critical considerations.

“Enterprises increasingly want data quality capabilities that integrate with their existing platforms rather than requiring additional infrastructure,” said Danijel Kivaranovic, PhD, CTO of digna. “Running validation directly in the source database helps maintain governance and reduces unnecessary system complexity.”

Supporting Complex Enterprise Data Environments

In addition to the expanded validation coverage, Release 2026.01 also introduces improvements to datasource modeling and connection management designed to support heterogeneous enterprise data landscapes.

The update includes global database connections, logical datasources, and the ability for projects to reference multiple source connections. These enhancements are intended to simplify configuration across environments where data resides in multiple warehouses or databases.

Together, the new features aim to make data quality operations easier to maintain as enterprise data architectures evolve.

Write in to psen@itechseries.com to learn more about our exclusive editorial packages and programs.

Hosted.com Examines Prompt Injection Threats Affecting Websites Using AI

0
Hosted.com Examines Prompt Injection Threats Affecting Websites Using AI

Hosted.com®

Hosted.com examines the growing risk of prompt injection attacks to businesses using AI tools, including their potential impact, and ways to reduce exposure.

Hosted.com has released a new article explaining the rise of prompt injection attacks and their implications for businesses that rely on Artificial Intelligence (AI) for their websites, automation, and backend tasks. It outlines how these attacks work, the risks they pose, and the security measures to help prevent and mitigate them.

Businesses rely on AI more than ever. When misused, risks go beyond technical issues. Understanding threats and using layered security helps prevent prompt injection and other AI attacks.”

— Wayne Diamond

The Growing Role of AI in Business Operations
AI is increasingly integrated into online businesses for customer communication and support, content generation, analytics, and automation. This means models interact with and train on User-Generated Content (UGC), downloaded files, databases, and external sources, which may contain harmful prompts.

While traditional cyber threats often target system vulnerabilities or login credentials, prompt injection attacks focus on influencing how AI models act. These attacks are designed to manipulate behavior rather than exploit conventional security gaps.

Marketing Technology News: MarTech Interview with Christopher P Willis, CMO at Acrolinx

How Prompt Injection Attacks Work
Prompt injection attacks involve embedding malicious instructions into data. These instructions may be hidden in form submissions, documents, website content, or links. When processed by Large Language Models (LLMs), the injected prompts can cause AI tools to override built-in safeguards.

Prompt injections can be used to expose sensitive information, perform unauthorized actions, generate misleading outputs, or assist in phishing to gain access to admin and banking accounts. Because they rely on manipulating AI rather than attacking software directly, detecting and preventing them can be difficult using traditional security methods alone.

The Risks for Online Businesses
For businesses that rely on AI to process customer data or automate workflows, prompt injection attacks present several risks. These include potential data exposure and theft, unauthorized changes to site content, and admin account takeovers.

This can, in turn, impact customer trust and business continuity. Security incidents involving AI systems may also lead to regulatory or legal issues, especially when sensitive or personal data is involved.

Marketing Technology News: Future of Retail Martech Part 1: Personalisation and AI

Infrastructure-Level Protection
Hosted.com’s article explains several infrastructure-level methods used to reduce exposure to prompt injection and related AI cyberattacks. These measures focus on identifying suspicious behavior before manipulated inputs are processed.

Comment sections, forms, and file upload areas are frequent entry points for manipulated inputs. Server-level file scanning can detect malicious scripts or embedded prompts in downloads and uploads.

Monitoring software can also identify unusual activity patterns that may indicate tampering during script execution. Request filtering can flag suspicious inputs before they reach websites or AI tools.

Traffic Filtering and Isolation
Web Application Firewalls (WAFs) provide an additional layer of protection by filtering inbound traffic and blocking anomalous requests from suspected AI bots.

Website isolation technologies further reduce risk by limiting the impact of a compromised file or script on other sites on the same server. By separating sites, isolation tools help prevent AI-related attacks on one site from spreading across the server.

According to Wayne Diamond, CEO of Hosted.com, “AI tools are used by businesses to operate and serve customers more than ever. When those tools are misused, the damage extends beyond technical issues. Understanding the risks and applying layered security helps prevent prompt injection attacks and other AI-based threats.”

Best Practices to Reduce Risk
In addition to Web Hosting infrastructure security, Hosted.com’s article covers best practices to reduce exposure to prompt injection attacks. These involve restricting AI permissions to essential functions, reviewing user-generated content before processing, and ensuring human oversight for sensitive tasks.

Monitoring for unusual behavior can also help identify potential manipulation early. While no single control can eliminate risk entirely, layered security, combined with operational awareness, can reduce both the likelihood and the impact of AI-related incidents.

Prompt injection attacks are continually evolving as AI advances, requiring security measures to adapt to emerging AI manipulation methods.

Write in to psen@itechseries.com to learn more about our exclusive editorial packages and programs.

Teneo and Thoughtworks Launch New AI-Focused Joint Venture

0
Teneo and Thoughtworks Launch New AI-Focused Joint Venture

New joint venture to help companies manage business transformation in the AI and agentic age

The venture combines boardroom strategy, corporate advisory services, AI and software engineering services and the AI/works™ agentic development platform to deliver measurable business impact at speed

Will help companies seize AI-driven market opportunities while managing risk, security, ethics and governance 

  • Aligns Teneo’s 1,800-plus C-suite and Board advisors and deep network of global relationships with Thoughtworks’ 10,000-plus engineers and AI delivery expertise

  • Builds new services designed for the C-suite and Board agenda with teams of executive advisers, engineers, data and AI specialists

  • Leverages Thoughtworks’ proprietary agentic development platform to move companies from AI strategy to execution in weeks, not years

Teneo, the global CEO advisory firm, and Thoughtworks, a global technology consultancy that integrates design, engineering and AI, announced the launch of a new AI-focused venture designed to help companies manage business transformation and turn AI ambition into measurable business outcomes.

As AI reshapes every industry, technology is at the heart of how every business runs. This joint venture is built for this new era. It redefines senior strategic advice as AI-native with the ability to design, build and run AI-powered platforms, so insight turns into action and strategy becomes operational across the enterprise.

The venture brings together Teneo’s trusted advisory and global client network with Thoughtworks’ more than 10,000 engineers and deep expertise in design, product engineering, data and AI. The goal is clear: help companies react in real time and capture value from AI investments that deliver business outcomes in weeks and months, not years.

Marketing Technology News: MarTech Interview with Reshma Iyer, Algolia’s Head of Product Marketing and Ecommerce

As companies invest heavily into AI infrastructure and modernization, many struggle to translate ambition into results. The new joint venture is built to close that gap. It will work directly with CEOs and executive teams to align strategy, operating model and technology execution across growth, productivity, modernization, risk, talent and reputation.

“As CEOs navigate unprecedented macroeconomic, geopolitical and technological disruption, they must also become the primary architects of their companies’ AI futures,” said Paul Keary, CEO of Teneo. “Teneo has invested hundreds of millions of dollars into world-class talent and capabilities to develop the leading global CEO advisory firm. Now in partnership with Thoughtworks, we will bring to bear Teneo’s services with Thoughtworks’ AI leadership and deep engineering capability to help our clients navigate business transformation in the AI and agentic age.”

“AI transformation only works when strategy, culture and execution move together,” Mike Sutcliff, CEO of Thoughtworks, said. “This venture unites CEO-level advisory with hands-on engineering and AI delivery. In three days, we can help clients align on new product concepts. In three weeks, we can build a working prototype. In three months, we can put new systems into production. That is the pace today’s market demands.”

Marketing Technology News: An Analysis of 2023’s Can’t-Miss Ad Opportunity

Built for CEO priorities

The joint venture will build new, integrated services designed specifically for the CEO agenda. Operating with multidisciplinary teams of executive advisers, product leaders, engineers, designers and data and AI specialists, it will create solutions that unite strategy and execution from day one. Services will include:

  • Accelerating CEO priorities tied to growth, productivity, modernization and resilience
  • Scaling enterprise AI programs from strategy through deployment, including generative AI and advanced analytics
  • Strengthening stakeholder trust through AI-powered insights and engagement across investors, regulators, governments and customers
  • Managing geopolitical and market risk through real-time monitoring and scenario planning
  • Supporting financial resilience with digital tools to improve liquidity and guide restructuring
  • Transforming customer and employee experiences through modern product platforms
  • Redesigning operating models and leading enterprise change at scale

“As companies navigate rapid shifts in technology and capital markets, AI represents both opportunity and disruption,” said Alex Pigliucci, Teneo’s Global Head of Enterprise Clients, who will lead the joint venture. “This partnership equips our advisory teams, and our clients, with the ability to combine strategy and execution in real time, particularly in critical moments such as acquisitions, restructurings and large-scale transformations.”

The joint venture will be established in New York, with global hubs across the Americas, Europe, Middle East and Asia-Pacific. It begins operations immediately, supported by Thoughtworks’ ecosystem of leading technology partners including Amazon Web Services (AWS), Google, NVIDIA, Microsoft, Databricks and Mechanical Orchard to bring these new services to market at speed.

Write in to psen@itechseries.com to learn more about our exclusive editorial packages and programs.