Network Break: FU on Intel Drivers, FISA Outside the US, and More!

Network Break! This week we start with some FU on Intel drivers, and how FISA affects people outside (and inside) the US.

In the news we cover Intel's rollout of new XPU silicon and associated software as it tries to make up ground against Nvidia's AI dominance, Zscaler's acquisition of a microsegmentation startup that gives it a foothold inside enterprise LANs, and the fifth anniversary of Juniper's Mist acquisition.

Forks of open source software including Redis and Terraform raise questions about open source as a business model, and TSMC and the US government come to a preliminary understanding over US funding for TSMC fabs in Arizona.

Concerns are being raised about having enough employees to staff new chip-making plants in the United States, and the US government wants to close a tax loophole that lets private space companies avoid paying for the shared costs of clearing airspace for rocket launches.

Sponsor: Gcore has introduced FastEdge, a new product in low-latency edge computing for serverless app deployment built on WebAssembly. With FastEdge, your code runs closer to your users than ever before, leveraging a global CDN network of over 160 data centers to ensure lightning-fast performance. You can join the FastEdge beta today and transform how your applications are delivered. Visit Gcore to learn more.

Tech Bytes: We also have a Tech Bytes podcast with sponsor Fortinet, where we talk about the push and pull between point security solutions and platforms.

Drew Conry-Murray (00:00:00) – Take a network break. Join us for our weekly speed run through the week's IT news. We're going to talk about a new GPU from Intel, a security acquisition by Zscaler that aims to do away with NAC, continuing US investment in chip manufacturing and more tech news.

Drew Conry-Murray (00:00:40) – We've also got a Tech Bytes podcast with sponsor Fortinet, where we talk about the push and pull between point security solutions and platforms.

Greg Ferro (00:01:07) – We were talking about software and how important the software frameworks are in the context of AI, specifically Nvidia's CUDA frameworks. And he's saying that, you know, graphics drivers are hard, and cards are often released with incomplete drivers. It's just that Nvidia and AMD have a lot of experience with this: a large pile of money, test gear, and relationships with game devs to polish things more before a graphics card release. Still, a new GPU architecture is often pretty rough, even for those companies. GPU drivers are hard. Intel is playing catch-up, but they continue to work at it. I'm not usually an Intel fanboy, but this is the most seriously they've taken graphics. I'll give them that.

Greg Ferro (00:01:44) – So the point he's making here is that vendors in the GPU space have often shipped incomplete and faulty products. And once a product is out there in public, they wait for customers to tell them what's wrong, and then they rapidly iterate on fixes, which is the time-honored tradition in technology: the customers always take the hit. They expect to buy faulty products and they expect to live with them. So that's just the way it is. But I would also make the counterpoint that we are at least two decades into GPU development, and GPU products have been mature for several years. Realistically, that should not happen. The GPUs we're now seeing as AI processors are being renamed XPUs; Intel and Broadcom in particular are now calling them XPUs, because an AI processor could be built many different ways: a GPU, a TPU, or some other type of accelerator. Sure, XPU seems to be a convention they're converging on, but it's much more than just drivers.

Greg Ferro (00:02:40) – It's also about the software frameworks that bring developers to the platform. Take PyTorch, a very popular machine learning library. Of course, machine learning is now just called AI, because, well, whatever. Intel has shipped extensions which must be installed to get maximum performance on Gaudi 3. So you have to go and get the Gaudi 3 extensions, or the Gaudi 2 extensions, and add them to your Python installation so that hardware-accelerated processing is in place. But if you go and have a look at, say, Intel support for Jupyter notebooks, another popular tool in the machine learning world, the support for that is not quite
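The opt-in nature of these extensions can be sketched in a few lines. This is a minimal, hypothetical illustration rather than Intel's actual API: it only probes whether a Gaudi PyTorch bridge package (assumed here to be importable as `habana_frameworks`, per Intel's public docs) is present, and falls back to plain CPU when it isn't, which is exactly the "no extension, no acceleration" situation described above.

```python
import importlib.util


def pick_device() -> str:
    """Return the device string a training script would target.

    NOTE: the module name 'habana_frameworks' and the 'hpu' device
    string are assumptions based on Intel/Habana documentation; the
    point is simply that hardware acceleration is opt-in via an
    extra package layered on top of stock PyTorch.
    """
    if importlib.util.find_spec("habana_frameworks") is not None:
        return "hpu"  # Gaudi accelerator available via the extension
    return "cpu"      # extension not installed: no hardware acceleration


print(pick_device())
```

On a machine without the extension installed, this prints `cpu`, which is the practical consequence Greg is pointing at: stock PyTorch alone doesn't light up the accelerator.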
