OpenAI seeks to automate ‘computer use’ for Macs in the enterprise 24 Oct 2025, 11:41 am

While AI bots have begun mastering tasks in browsers and on Windows, Mac-using enterprises have largely been overlooked, until now. OpenAI aims to change that with its acquisition of generative AI interface maker Software Applications Incorporated.

At the center of the deal is Sky, a generative AI assistant for macOS that accepts natural-language input and that the San Francisco-headquartered startup has been developing to help users automate various tasks.

“Whether you’re chatting, writing, planning, or coding, Sky understands what’s on your screen and can take action using your apps,” the startup wrote on its portal describing Sky.

Giving AI control of the OS

The idea of automating tasks for desktop users is not entirely novel. Last year in October, Anthropic became the first LLM provider to showcase the possibility of controlling a computer or some parts of its operating system.

That ability, which Anthropic had termed “computer use,” enabled developers to instruct Claude 3.5 Sonnet, through the Anthropic API, to read and interpret what’s on the display, type text, move the cursor, click buttons, and switch between windows or applications.

It caught the attention of experts and enterprises because the capability was a major step up from traditional automation practices, such as robotic process automation (RPA) tools, which take more time and labor to set up and still require constant maintenance.

Another issue with RPA tools was that enterprise users or developers would have to change the code or script as the interface of the operating system changed. In contrast, Anthropic’s ability demonstrated that LLMs can understand what they are looking at, eliminating the need to change scripts as interfaces change.

Just days after Anthropic’s announcement, Google also entered the AI-based computer use fray by showcasing Jarvis, an offering designed to automate tasks such as research and shopping within the Chrome browser with the help of the company’s Gemini 2.0 LLM.

Around the same time, OpenAI reportedly revealed that it had been working on a similar capability since February last year.

The acquisition of Sky and its integration into ChatGPT, according to Forrester principal analyst Charlie Dai, marks a significant step for OpenAI toward gaining a sizeable share of the nascent but fast-evolving market for AI-based automation driven by agentic AI.

OpenAI is likely to market use cases that involve automating workflows across apps, coding assistance, and integrating with collaboration tools for increased productivity, Dai said, adding that the company is targeting macOS as it is popular among developers and creative professionals, giving it a sizeable customer base.

The Sky integration is not the only piece of OpenAI’s macOS footprint.

Just last week, it launched ChatGPT Atlas — a web browser with ChatGPT built in — designed to automate tasks like bookings directly within the browser window, echoing Google’s Jarvis.

OpenAI is expected to release Atlas for Windows, iOS, and Android in the future. Microsoft, OpenAI’s close partner, has introduced similar capabilities for Windows via Copilot Mode in its Edge browser.


The day the cloud went dark 24 Oct 2025, 9:00 am

This week, the impossible happened—again. Amazon Web Services, the backbone of the digital economy and the world’s largest cloud provider, suffered a large-scale outage. If you work in IT or depend on cloud services, you didn’t need a news alert to know something was wrong. Productivity ground to a halt, websites failed to load, business systems stalled, and the hum of global commerce was silenced, if only for a few hours. The impact was immediate and severe, affecting everything from e-commerce giants to startups, including my own consulting business.

A quick scan of AWS’s status page confirmed that regions across the United States and Europe were reporting degraded service. Calls poured in from clients who were desperate for updates. They had invoices that couldn’t be processed, schedules that crumbled into digital dust, and so much more. I estimated over $3,000 in lost productivity for my small business alone, which is nothing compared to what some of the Fortune 500s probably faced. The real cost to businesses worldwide will likely run deep into the millions.

How and why this happened

When the screens froze and alerts began to flood in, my first thought was: Is this an accident or an attack? AWS engineering is still actively investigating the root cause. Early signs suggest a misconfiguration in network management systems during a routine infrastructure scaling process. As demand for cloud resources keeps rising—driven by everything from growing enterprise SaaS adoption to generative AI training workloads—cloud providers need to continually expand and improve their physical infrastructure. In this specific incident, a change that should have been routine caused critical routing hardware to fail, leading to a ripple effect across multiple AWS availability zones.

AWS responded quickly, rolling back changes and isolating affected components. Communications from AWS Support, while timely, were predictably technical and lacked specifics as the crisis developed. Issues with autoscaling, load balancing, and traffic routing caused downstream effects on seemingly unrelated services. It’s a reminder that, despite the focus on “resilience” and “availability zones,” cloud infrastructure is still subject to the same fundamental laws of physics and software vulnerabilities, just like anything in your own data center.

The final resolution came a few hours later, after network engineers manually rebalanced the distributed systems and verified the restoration of normal operations. Connectivity returned, but some customers reported data inconsistencies, delayed API recoveries, and slow catch-up times. The scramble to communicate with clients, reset processes, and work through the backlog served as a harsh reminder: Business continuity depends on more than hope and a robust marketing pitch from your provider.

The myth of the bulletproof SLA

Some businesses hoped for immediate remedies from AWS’s legendary service-level agreements. Here’s the reality: SLA credits are cold comfort when your revenue pipeline is in freefall. The truth that every CIO has faced at least once is that even industry-leading SLAs rarely compensate for the true cost of downtime. They don’t make up for lost opportunities, damaged reputations, or the stress on your teams. As regional outages increase due to the growth of hyperscale cloud data centers, each struggling to handle the surge in AI-driven demand, the safety net is becoming less dependable.

What’s causing this increasing fragility? The cloud isn’t a single, uniform entity. Each expansion, new data center, and technology update adds to the complexity of routing infrastructure, physical connections, and downstream dependencies. AI and machine learning workloads are known for their high compute and storage needs. Their growth only heightens pressure on these systems. Rising demand pushes operational limits, exposing cracks in an infrastructure meant to be invisible and seamless.

Be ready for the next outage

This outage is a wake-up call. Headlines will fade, and AWS (and its competitors) will keep promising ever-improving reliability. Just don’t forget the lesson: No matter how many “nines” your provider promises, true business resilience starts inside your own walls. Enterprises must take matters into their own hands to avoid existential risk the next time lightning strikes.

First, invest in multicloud and hybrid architectures. Relying on a single provider—no matter how big—means putting all your eggs in one basket. By designing applications to be portable across clouds (AWS, Azure, Google Cloud, or even on-premises systems), businesses can switch to a secondary platform if disaster strikes. Yes, it’s complex. Yes, it costs extra. But compared to a multimillion-dollar outage, it’s a decision that pays off.

Second, automate both detection and response processes. The speed of detection and response determines who weathers the storm and who capsizes. Automated monitoring must go beyond simple system health checks to include application-level functionality and business KPIs. Systems should trigger alerts and execute runbooks that attempt recovery or at least gracefully degrade service. Human reaction time is measured in minutes; cloud failures occur in seconds.

Third, don’t just write disaster recovery plans, rehearse them. Business continuity is only possible if tested under realistic conditions. Enterprises that regularly simulate cloud outages—shutting off services, rerouting traffic, and even introducing chaos—are the best prepared when the real disaster hits. The muscle memory built during drills makes all the difference when the stakes are high. Staff shouldn’t be learning the playbook in real time.

We’ll spend weeks estimating the lost productivity from this AWS outage. For many, the cost will be great and the lessons learned too late. The only certainty is that this disruption won’t be the last. As the global digital economy expands and AI demands more bandwidth and computing power, outages are likely to become more frequent. As technologists and business leaders, we can hope for greater transparency and better tools from our cloud partners, but our best defense is a proactive resilience plan. The cloud is the future, but we must weather both stormy and sunny days.


Python has a friend in Rust 24 Oct 2025, 9:00 am

Python 3.14 is here, and it’s bursting with goodies! Plus, Rust is making it easier to redistribute Python apps, PDM is simplifying Python package management, and Java is challenging Python for developing AI applications.

Top picks for Python readers on InfoWorld

The best new features and fixes in Python 3.14
Welcome to the biggest new Python release in years, with official support for free-threaded (“no-GIL”) Python, a revamped installation manager for Windows, template strings, and so much more.

Java or Python for building agents?
Python may be the default choice for building AI, but if your team and culture are rooted in Java, stick with it—the tools are falling into place.

PyCrucible: An easy way to redistribute your Python apps
Make Python apps into single-file, click-to-run executables with Rust-powered PyCrucible. Almost no additional configuration or changes necessary.

PDM: A smarter way to manage Python packages
Get a handle on project dependencies, and even run Python scripts with inline package information, via Python Development Master, or PDM for short.

More good reads and Python updates elsewhere

PEP 810: Explicitly lazy imports
A proposal for a native Python mechanism to allow imports to be lazily resolved, or loaded only when the imported name is actually used for the first time. It is still in the early stages, but looks promising.

Why it took 4 years to get a lock files specification
The inside story of the long, hard, multi-year road to create a lockfile format for Python. Spoiler: Legacy workflows make major changes difficult.

Python 3.14 performance looking good in benchmarks: Michael Larabel
The founder of Phoronix runs the hard numbers behind the performance of Python 3.14 compared to its predecessors. Some individual tests run notably slower (garbage collection), but the overall trend is faster.

Translating Cython to Mojo, a first attempt
What you can expect if you translate Cython (well-established, but sometimes clunky) into Mojo (new, fast, but also relatively untested).


Next.js 16 features explicit caching, AI-powered debugging 24 Oct 2025, 9:00 am

Vercel’s React framework for building full-stack web applications gets an upgrade in Next.js 16, with more explicit caching and AI-based debugging, among other features.

Announced October 21, Next.js 16 is now generally available, with installation instructions at nextjs.org. Highlights include Cache Components, a new set of features designed to make caching both more explicit and more flexible. The new caching capabilities stem from a "use cache" directive that can be used to cache pages, components, and functions. Cache Components leverages the compiler to generate cache keys wherever they are used.

Next.js 16 also introduces Next.js DevTools MCP, a Model Context Protocol integration for AI-assisted debugging with contextual insight. AI agents are provided with routing, caching, and rendering behavior as well as unified logs, automatic error access, and page awareness. This enables agents to diagnose issues, explain behavior, and suggest fixes within a development workflow.

Following the stable release of Turbopack in Next.js 15 last year, Next.js 16 elevates Turbopack to the default bundler for new projects, though applications with a custom webpack setup can continue using webpack. With Turbopack, a Rust-based incremental bundler optimized for JavaScript and TypeScript, developers can expect faster production builds and an even faster Fast Refresh, Vercel said. Turbopack also now supports file system caching as a beta capability for storing compiler artifacts on disk between runs. This results in significantly faster compile times across restarts, particularly in large projects, said the company.

Built-in support for the React compiler is now stable in Next.js 16. The React compiler automatically memoizes components, reducing unnecessary re-renders with zero manual code changes. The reactCompiler configuration option has been promoted from experimental to stable, but is not yet enabled by default. Another highlight is the overhauled routing and navigation system, making page transitions leaner and faster, according to Vercel.

Next.js 16 includes the following additional features and improvements:

  • updateTag() is a new server-actions-only API that provides read-your-writes semantics, expiring and immediately reading fresh data within the same request. This ensures interactive features reflect changes immediately.
  • The App Router uses the latest React Canary release, which includes React 19.2 features and other features being incrementally stabilized. Highlights include view transitions, to animate elements that update inside a transition or navigation, and rendering of “background activity” by hiding the UI with display: none while maintaining state and cleaning up effects.
  • An alpha version of the Build Adapters API is included. Build Adapters are used to create custom adapters that hook into the build process, enabling deployment platforms and custom build integrations to modify Next.js configuration or process the build output.


Using the SkiaSharp graphics library in .NET 23 Oct 2025, 9:00 am

The news that the .NET UI framework Uno Platform project would be upstreaming features and fixes into the core multiplatform .NET libraries makes a lot of sense. The open source project has been working on cross-device .NET features for some time now, and deeper involvement with Microsoft on elements of .NET MAUI for Android and iOS builds on that existing relationship.

One of the interesting parts of the announcement was the news that Uno would begin co-maintaining the SkiaSharp project with Microsoft. This project is an important part of delivering cross-platform, portable .NET code, but most people don’t know about it as it’s part of the plumbing and hidden underneath libraries and components.

So, what is SkiaSharp and how can you use it in your own code?

Introducing SkiaSharp

Originally developed by Skia Inc. before the company was acquired by Google in 2005, the once-proprietary 2D graphics library was released under a BSD license in 2008. Since then, the project has become a key element of many open source graphics systems and is used in the Chromium browser engine, Mozilla’s tools, and LibreOffice.

In Windows development, both Uno and Avalonia use it as a cross-platform drawing engine, helping your code work on desktop and on mobile devices. Versions are available for most common operating systems, including Windows, iOS, macOS, Linux, and Android.

The Skia 2D graphics library focuses on drawing and targeting most of the common back ends. As it is used as a common abstraction for different graphics APIs, it’s an ideal technology to help deliver cross-platform graphics operations, allowing code to produce the same visual output no matter the underlying display technology.

If you’re building C++ applications, you can work with a suitable port of Skia directly. But most of us use higher-level programming languages, like C#, along with cross-platform tools like Microsoft’s MAUI (Multi-platform App UI) to build applications that work on our choice of target environments. What’s needed is a way to use Skia from .NET, hence the SkiaSharp project.

SkiaSharp is currently lagging the Google build of Skia significantly and is based on the M118 build. This is currently about two years old, and while it still supports most of the requirements of .NET development, catching up with features would be good. Having extra eyes on pull requests and a focus on new technologies, like WebAssembly multitasking, should help move the platform forward and speed up development.

The foundations of .NET UI

Originally part of the Mono repository (and so used by Mono-based platforms like Xamarin), SkiaSharp gives you cross-platform bindings for various builds of the .NET platform with support for key technologies like WebAssembly and WinUI 3. Much of the time, this means you won’t be directly calling it; instead, you’re using a framework that has SkiaSharp render XAML components for you.

However, at times you need to get into the plumbing and build your own 2D graphics layer alongside familiar UI components. Perhaps you need a custom control that isn’t available, or you want to render images generated by, say, a CAD package or the output of a scientific computing application like a finite-element analysis mesh.

SkiaSharp and Linux

Like most .NET tools, SkiaSharp is available from NuGet, and can be installed from the .NET CLI. The development team takes care to point out that one library doesn’t fit all Linux distributions, and there are several official and community packages for specific target distributions and for x64 and Arm.

In practice, the core official Linux native package will work for popular Linuxes like Ubuntu, and the GitHub distribution provides tools to help you build your own releases on unsupported distributions. The SkiaSharp team has a mechanism for promoting community builds to the main release, based on popularity.

Building your own Linux tool requires cloning a couple of GitHub repositories from Mono and Google. A script ensures you have the right dependencies installed, and the library is compiled and built using the Ninja build tools. You can customize builds using arguments, for example, producing a C library rather than C++, or a version that doesn’t use a GPU.

Writing SkiaSharp code

You can find details of the SkiaSharp namespace on Microsoft Learn. Like most low-level libraries, there are a lot of classes; however, the basic approach to using SkiaSharp in your code is very similar to working with JavaScript’s 2D drawing tools, starting with a surface that hosts a canvas, where you then draw or render an image.

Looking at the namespace is more than a little intimidating, but using it in frameworks like MAUI is a lot easier than it first appears. Outside the base NuGet package, there are other higher-level packages that simplify working with SkiaSharp.

If you’re using MAUI, use the SkiaSharp.Views.Maui.Controls package. NuGet will install required dependencies when you install it through Visual Studio. You can now add the UseSkiaSharp method on a MauiAppBuilder object to start using its tools.
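
To make the setup concrete, here is a minimal sketch of enabling SkiaSharp in a MAUI app’s MauiProgram.cs. It assumes the SkiaSharp.Views.Maui.Controls package is installed and that App is the root application class generated by the standard MAUI template.

using Microsoft.Maui.Hosting;
using SkiaSharp.Views.Maui.Controls.Hosting;

public static class MauiProgram
{
    public static MauiApp CreateMauiApp()
    {
        var builder = MauiApp.CreateBuilder();

        builder
            .UseMauiApp<App>()   // App is the application's root class from the template
            .UseSkiaSharp();     // registers SkiaSharp handlers such as the canvas view

        return builder.Build();
    }
}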

Once you have enabled SkiaSharp, your next step is to add a canvas to a page. By default, this fills the page, but you can use it in conjunction with other .NET controls using a XAML page description to lay out the controls. You can now add a drawing surface to the canvas, which holds the details of your 2D image. This can be a blank drawing surface or a pre-existing bitmap. Once you have a canvas, call the clear method on it to fill the canvas with your choice of color (the default is transparent).

Drawing on a SkiaSharp canvas

You can draw on the canvas using a paint object that defines a style and a color. Styles have associated parameters. For example, if you’re drawing a line with a stroke, you can choose the width of the line. Other options are a fill or a stroke and fill (which draws a line and fills its interior with a block of color).

With the canvas in place, you’re able to use SkiaSharp’s drawing primitives to add common shapes, like circles or rectangles. Other options support features like anti-aliasing, smoothing curves as needed. SkiaSharp has its own coordinate system, which may differ from that used by your choice of framework. It uses pixel-based measurements, so you will need to apply conversions to ensure that drawings are device-independent, using the canvas’ size property.
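
The sketch below pulls these pieces together: a page hosting an SKCanvasView, a PaintSurface handler that clears the canvas, and a couple of primitives drawn with fill and stroke paints. Class and member names are illustrative rather than anything SkiaSharp prescribes, and sizes are derived from the surface dimensions to keep the drawing device-independent.

using Microsoft.Maui.Controls;
using SkiaSharp;
using SkiaSharp.Views.Maui;
using SkiaSharp.Views.Maui.Controls;

public class DrawingPage : ContentPage
{
    private readonly SKCanvasView canvasView;

    public DrawingPage()
    {
        canvasView = new SKCanvasView();
        canvasView.PaintSurface += OnPaintSurface;  // raised whenever the canvas needs redrawing
        Content = canvasView;                       // by default the canvas fills the page
    }

    private void OnPaintSurface(object? sender, SKPaintSurfaceEventArgs e)
    {
        SKCanvas canvas = e.Surface.Canvas;
        canvas.Clear(SKColors.White);               // fill the surface with a background color

        // e.Info reports the surface size in pixels; derive sizes from it so the
        // drawing scales with the device instead of using hard-coded pixel values.
        float cx = e.Info.Width / 2f;
        float cy = e.Info.Height / 2f;
        float radius = Math.Min(e.Info.Width, e.Info.Height) * 0.25f;

        // The using declarations dispose each paint at the end of the paint pass.
        using var fill = new SKPaint
        {
            Style = SKPaintStyle.Fill,
            Color = SKColors.LightSkyBlue
        };
        using var stroke = new SKPaint
        {
            Style = SKPaintStyle.Stroke,            // outline only
            StrokeWidth = 8,
            Color = SKColors.SteelBlue,
            IsAntialias = true                      // smooth curved edges
        };

        canvas.DrawCircle(cx, cy, radius, fill);    // filled circle
        canvas.DrawCircle(cx, cy, radius, stroke);  // outlined circle
        canvas.DrawRect(cx - radius, cy - radius, radius * 2, radius * 2, stroke);
    }
}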

SkiaSharp’s low-level paint tools can animate images, redrawing them in different positions and with different colors. The development team recommends freeing up resources after each animation cycle, as doing it manually can be quicker than using .NET’s garbage collector.
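
One way to drive a simple animation is to keep some state on the page, advance it on a timer, and ask the view to repaint by calling InvalidateSurface(). The members below are a hypothetical addition to the DrawingPage class sketched above; the timer type and frame interval are illustrative assumptions, not something SkiaSharp requires.

// Additional members for the DrawingPage class sketched above (names are illustrative).
private float rotation;

private void StartAnimation()
{
    var timer = new System.Timers.Timer(16);   // roughly 60 ticks per second
    timer.Elapsed += (_, _) =>
    {
        rotation = (rotation + 2) % 360;       // state the PaintSurface handler can read
        // Marshal back to the UI thread before asking SkiaSharp to repaint.
        MainThread.BeginInvokeOnMainThread(() => canvasView.InvalidateSurface());
    };
    timer.Start();
}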

Beyond shapes

Similar tools can render text on a canvas, painting the text in a chosen font, color, and size. Text is a graphical object like any other, so you can use the drawing tools to add special effects, such as drawing only the text outlines. SkiaSharp can also render bitmaps, which can be downloaded, stored in local resource bundles as part of an application, or loaded directly from the device. Again, you need your own scaling code to display the image appropriately on the canvas.

Along with drawing primitives, there’s support for transforms that can help draw complex shapes from simple components. These allow you to move, scale, and rotate parts of a canvas. Other, more complex functions add effects that include color filters, alpha channel masks, blends, and shaders. Layering different effects and transformations on drawings and bitmaps gives you a lot of options, so you need a tool to help experiment with them.
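
As a small illustration of text plus a transform, the helper below (a hypothetical addition you might call from the OnPaintSurface handler above) rotates the coordinate system, draws a label, and then restores the canvas state. Text-related APIs differ slightly across SkiaSharp versions, so treat the property names here as a sketch.

// A helper you might call from OnPaintSurface; names and values are illustrative.
private static void DrawRotatedLabel(SKCanvas canvas, float cx, float cy)
{
    using var textPaint = new SKPaint
    {
        Color = SKColors.DarkSlateGray,
        TextSize = 48,                      // pixel-based, so scale it for the device if needed
        IsAntialias = true
    };

    canvas.Save();                          // remember the current transform
    canvas.RotateDegrees(-15, cx, cy);      // rotate subsequent drawing around (cx, cy)
    canvas.DrawText("Hello, SkiaSharp", cx, cy, textPaint);
    canvas.Restore();                       // undo the rotation for later drawing
}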

Experimenting with Skia on the web

I wasn’t able to find a way to experiment directly with SkiaSharp code, though the Google Skia site provides a basic sandbox environment that mixes a web-based canvas with a REPL. Here you can write C++ Skia code and try out its options with a set of named “Fiddles” that provide sample operations that can easily be translated to .NET and to SkiaSharp.

Treating SkiaSharp like any other drawing API gives you plenty of flexibility, though it does require providing your own code for scaling to ensure images and text aren’t distorted. That’s to be expected, as it works at a much lower level than laying out controls on a XAML canvas. Here you’re doing all the work that WinUI 3 or Uno does for you, but building your own custom visualizations and controls, extending the platform the way you want and not having to rely on controls that don’t do what you need.


How to use keyed services in ASP.NET Core 23 Oct 2025, 9:00 am

Dependency injection (also known as DI) is a design pattern in which the dependent types of a class are injected (passed to it by another class or object) rather than created directly, thereby facilitating loose coupling and promoting easier testing and maintenance.

In ASP.NET Core, both framework services and application services can be injected into your classes, rather than being tightly coupled. In this article, we’ll examine how we can work with keyed services to simplify dependency injection in ASP.NET Core.

To use the code examples provided in this article, you should have Visual Studio 2022 installed in your system. If you don’t already have a copy, you can download Visual Studio 2022 here.

Create an ASP.NET Core Web API project in Visual Studio 2022

To create an ASP.NET Core Web API project in Visual Studio 2022, follow the steps outlined below.

  1. Launch the Visual Studio 2022 IDE.
  2. Click on “Create new project.”
  3. In the “Create new project” window, select “ASP.NET Core Web API” from the list of templates displayed.
  4. Click “Next.”
  5. In the “Configure your new project” window, specify the name and location for the new project. Optionally check the “Place solution and project in the same directory” check box, depending on your preferences.
  6. Click “Next.”
  7. In the “Additional Information” window shown next, select “.NET 9.0 (Standard Term Support)” as the framework version and uncheck the check box that says “Use controllers.” We’ll be using minimal APIs in this project.
  8. Elsewhere in the “Additional Information” window, leave the “Authentication Type” set to “None” (the default) and make sure the check boxes “Enable Open API Support,” “Configure for HTTPS,” and “Enable Docker” remain unchecked. We won’t be using any of those features here.
  9. Click “Create.”

We’ll use this ASP.NET Core Web API project to work with the code examples given in the sections below.

What are keyed services?

Keyed services—i.e., services that you register and access using a unique key—provide an elegant way to handle multiple implementations of the same interface when working with dependency injection in ASP.NET Core. They enable you to simplify dependency injection without using custom factories or service locator patterns, thereby allowing multiple implementations of the same interface to coexist in harmony.

Recall that dependency injection eases the testing and maintenance of our code by allowing us to implement a service in one place, then use DI to insert it anywhere in our application where we want to use it. Let’s say we have a logger service that we want to use in six parts of our application. Without DI, we would need to implement the logger service six times, in all of these different places—resulting in duplication of code and convoluted testing and maintenance. If we later wanted to change the logger service, we would need to find and replace all of these implementations. With DI, we have only one implementation to change.

Keyed services offer additional advantages when using DI. They allow us to have multiple versions of a service and choose the version to use at run time. In addition, they eliminate the need to write boilerplate code, namely factories, when using multiple implementations of an interface. And they enhance testability by keeping our source code simple, modular, type-safe, extensible, and manageable.

Next, we’ll examine how we can implement keyed services in an ASP.NET Core application, using a custom logger.

Create a custom logger interface

First off, let’s create a new .cs file named ICustomLogger.cs and enter the following code to define an interface named ICustomLogger.

public interface ICustomLogger
{
    void Log(string message);
}

Create three logger classes

Next, we’ll create three new classes called FileLogger, DatabaseLogger, and EventLogger in three different files. Name the files FileLogger.cs, DatabaseLogger.cs, and EventLogger.cs.

Write the following code in FileLogger.cs.

public class FileLogger : ICustomLogger
{
    public void Log(string message)
    {
        File.AppendAllText("log.txt", $"[FileLogger] {DateTime.Now}: {message}\n");
    }
}

Write the following code in DatabaseLogger.cs.

public class DatabaseLogger : ICustomLogger
{
    public void Log(string message)
    {
        Console.WriteLine($"[DatabaseLogger] {DateTime.Now}: {message}");
    }
}

And write the following code in EventLogger.cs.

public class EventLogger : ICustomLogger
{
    public void Log(string message)
    {
        Console.WriteLine($"[EventLogger] Event log {DateTime.Now}: {message}");
    }
}

Register the logger classes as keyed services

Before we can use our logger services, we must register them with the ASP.NET Core request processing pipeline. We can register these implementations of ICustomLogger in the Program.cs file using the following code.

var builder = WebApplication.CreateBuilder(args);
// Register multiple keyed services for the ICustomLogger interface
builder.Services.AddKeyedScoped("file");
builder.Services.AddKeyedScoped("database");
builder.Services.AddKeyedScoped("event");
var app = builder.Build();

Note how the FileLogger, DatabaseLogger, and EventLogger services are registered using the keys "file", "database", and "event", respectively.
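
Note that keyed registration is available for the other service lifetimes as well. A minimal sketch of the alternatives (not required for this example):

// Hypothetical alternatives to the scoped registrations above; pick the
// lifetime that matches how each logger instance should be shared.
builder.Services.AddKeyedSingleton<ICustomLogger, FileLogger>("file");    // one instance for the whole app
builder.Services.AddKeyedTransient<ICustomLogger, EventLogger>("event");  // a new instance per resolution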

Inject the keyed logger services

We can use the [FromKeyedServices] attribute to inject a specific implementation of our logger service in our minimal API endpoints as shown in the code snippet given below.

app.MapGet("/customlogger/file", ([FromKeyedServices("file")] ICustomLogger fileLogger) =>
{
    fileLogger.Log("This text is written to the file system.");
    return Results.Ok("File logger executed successfully.");
});
app.MapGet("/customlogger/db", ([FromKeyedServices("database")] ICustomLogger databaseLogger) =>
{
    databaseLogger.Log("This text is stored in the database.");
    return Results.Ok("Database logger executed successfully.");
});
app.MapGet("/customlogger/event", ([FromKeyedServices("event")] ICustomLogger logger) =>
{
    logger.Log("This text is recorded in the event system.");
    return Results.Ok("Event logger executed successfully.");
});

Thus, by using DI and keyed services, we can implement each of our logger services once, then simply ask for the right type of logger when we need one without having to use a factory to instantiate the logger. And whenever we want to swap the implementations—from FileLogger to DatabaseLogger, for example—all we need to do is change the configuration we specified while registering the services with the container. The DI system will plug in the right logger automatically at run time.
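
When the key is known only at run time, you can also resolve a keyed service programmatically from the container instead of using the attribute. Here is a minimal sketch that reuses the registrations shown above; the route parameter is illustrative.

app.MapGet("/customlogger/{key}", (string key, IServiceProvider services) =>
{
    // Resolve the implementation registered under the supplied key ("file",
    // "database", or "event"); this throws if no matching registration exists.
    var logger = services.GetRequiredKeyedService<ICustomLogger>(key);
    logger.Log($"Logged via the '{key}' logger.");
    return Results.Ok($"{key} logger executed successfully.");
});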

Complete keyed services example – minimal APIs

The complete source code of the Program.cs file is given below for your reference.

using Microsoft.EntityFrameworkCore; // for UseSqlServer (EF Core)

var builder = WebApplication.CreateBuilder(args);
// Register multiple keyed services for the ICustomLogger interface
builder.Services.AddKeyedScoped("file");
builder.Services.AddKeyedScoped("database");
builder.Services.AddKeyedScoped("event");
// Add services to the container.
builder.Services.AddControllers();
var connectionString = builder.Configuration.GetConnectionString("DefaultConnection");
builder.Services.AddDbContext<AppDbContext>(options => // AppDbContext is a placeholder for your EF Core DbContext type
{
    options.UseSqlServer(
      builder.Configuration["ConnectionStrings:DefaultConnection"]);
});
var app = builder.Build();
// Configure the HTTP request pipeline.
app.MapGet("/customlogger/file", ([FromKeyedServices("file")] ICustomLogger fileLogger) =>
{
    fileLogger.Log("This text is written to the file system.");
    return Results.Ok("File logger executed successfully.");
});
app.MapGet("/customlogger/db", ([FromKeyedServices("database")] ICustomLogger databaseLogger) =>
{
    databaseLogger.Log("This text is stored in the database.");
    return Results.Ok("Database logger executed successfully.");
});
app.MapGet("/customlogger/event", ([FromKeyedServices("event")] ICustomLogger logger) =>
{
    logger.Log("This text is recorded in the event system.");
    return Results.Ok("Event logger executed successfully.");
});
app.UseHttpsRedirection();
app.MapControllers();
app.Run();

public interface ICustomLogger
{
    void Log(string message);
}
public class FileLogger : ICustomLogger
{
    public void Log(string message)
    {
        File.AppendAllText("log.txt", $"[FileLogger] {DateTime.Now}: {message}\n");
    }
}
public class DatabaseLogger : ICustomLogger
{
    public void Log(string message)
    {
        Console.WriteLine($"[DatabaseLogger] {DateTime.Now}: {message}");
    }
}
public class EventLogger : ICustomLogger
{
    public void Log(string message)
    {
        Console.WriteLine($"[EventLogger] Event log {DateTime.Now}: {message}");
    }
}

Injecting keyed services in controllers

If you’re using API controllers in your application, you can inject these keyed services using the constructor of your controller class as shown in the following code.

[ApiController]
[Route("api/customlogger")]
public class CustomLoggerController : ControllerBase
{
    private readonly ICustomLogger _fileLogger;
    private readonly ICustomLogger _databaseLogger;
    private readonly ICustomLogger _eventLogger;

    public CustomLoggerController(
        [FromKeyedServices("file")] ICustomLogger fileLogger,
        [FromKeyedServices("database")] ICustomLogger databaseLogger,
        [FromKeyedServices("event")] ICustomLogger eventLogger)
    {
        _fileLogger = fileLogger;
        _databaseLogger = databaseLogger;
        _eventLogger = eventLogger;
    }
}

Complete keyed services example – controllers

The complete source code of the CustomLoggerController class is given in the code listing below.

[ApiController]
[Route("api/customlogger")]
public class CustomLoggerController : ControllerBase
{
    private readonly ICustomLogger _fileLogger;
    private readonly ICustomLogger _dbLogger;
    private readonly ICustomLogger _eventLogger;
    public CustomLoggerController(
        [FromKeyedServices("file")] ICustomLogger fileLogger,
        [FromKeyedServices("database")] ICustomLogger dbLogger,
        [FromKeyedServices("event")] ICustomLogger eventLogger)
    {
        _fileLogger = fileLogger;
        _dbLogger = dbLogger;
        _eventLogger = eventLogger;
    }
    [HttpPost("file")]
    public IActionResult LogToFile([FromBody] string message)
    {
        _fileLogger.Log(message);
        return Ok("File logger invoked.");
    }
    [HttpPost("database")]
    public IActionResult LogToDatabase([FromBody] string message)
    {
        _dbLogger.Log(message);
        return Ok("Database logger invoked.");
    }
    [HttpPost("event")]
    public IActionResult LogToEvent([FromBody] string message)
    {
        _eventLogger.Log(message);
        return Ok("Event logger invoked.");
    }
}

Key takeaways

Keyed services enable you to register multiple implementations of the same interface and resolve them at run time using a specific key that identifies each implementation. This lets you select a particular service dynamically at run time and provides a cleaner, type-safe alternative to service factories or manual service resolution techniques. That said, you should avoid keyed services unless you genuinely need to choose among multiple implementations of an interface at run time. Resolving dependencies this way introduces additional complexity and performance overhead, especially when many dependencies are involved. It can also lead to wiring up dependencies you don’t really need and complicate dependency management in your application.


A practitioner’s primer on deterministic application modernization 23 Oct 2025, 9:00 am

Large organizations rarely have just a handful of applications. They have thousands, often representing billions of lines of code. These code bases span decades of frameworks, libraries, and shifting best practices. The result: outdated APIs, inconsistent conventions, and vulnerabilities that put delivery and security at risk.

Manual refactoring doesn’t scale in this environment. OpenRewrite was created to solve this.

OpenRewrite is an open-source automated refactoring framework that enables safe, deterministic modernization for developers. It rests on two pillars:

  • Lossless Semantic Trees (LSTs): a compiler-accurate, rich data representation of source code.
  • Recipes: modular and deterministic programs that perform transformations.

Together, these provide a foundation for application modernization that is repeatable, auditable, and scalable.

Lossless Semantic Trees: Knowing even more than the compiler

Most automated refactoring tools work with basic text patterns or Abstract Syntax Trees (ASTs). ASTs are the backbone of compilers, but they’re not designed for modernization. They strip out comments, whitespace, and formatting, and they don’t resolve method overloads, generics, or dependencies across classpaths. They give you what the code says, not what it means. This leads to problems: missing context, broken or missing formatting, and incorrect assumptions about what code actually means.

OpenRewrite takes a fundamentally different approach with Lossless Semantic Trees. Consider this example code snippet:

import org.apache.log4j.Logger;
// com.mycompany.Logger (a custom class) is referenced by its fully qualified name below

public class UserService {
    private static final Logger log = Logger.getLogger(UserService.class);
    private com.mycompany.Logger auditLog = new com.mycompany.Logger();
    
    public void processUser() {
        log.info("Processing user");        // log4j - should be migrated
        auditLog.info("User processed");    // custom - should NOT be migrated
    }
}

Text-based tools trying to migrate from Log4j to SLF4J might search and replace log.info() calls, but they can’t tell which logger is which. This leaves developers slogging through false positives, such as the custom company logger being flagged for migration even though it should be left alone, while classes that merely happen to have info() methods can also be caught up in the change.

ASTs understand code structure better than text patterns; they know what’s a method call versus a string literal, for example. But ASTs still can’t tell you which Logger class each variable actually references, or what the real type is behind each log.info() call. On top of the missing semantic information, ASTs strip away all formatting, whitespace, and comments.

How LSTs work differently

LSTs solve these problems by preserving everything that matters while adding complete semantic understanding. All comments stay exactly where they belong, and formatting, indentation, and whitespace are maintained. So pull requests and commit diffs look clean because unchanged code stays identical.

Plus, LSTs resolve types across your entire code base, including:

  • Method overloads (which findById method is actually being called?)
  • Generic parameters (what type is inside that List?)
  • Inheritance relationships (what methods are available on this object?)
  • Cross-module dependencies (types defined in other JARs)

This semantic understanding enables surgical precision that simpler tools can’t achieve. For instance, in the following example, when a recipe targets java.util.List, it will only modify the first line—no false positives.

import java.util.List;
import com.mycompany.List; // Custom List class

private List data;     // LST knows this is java.util.List
private com.mycompany.List items; // LST knows this is the custom class

[Figure: OpenRewrite AST vs. LST. Source: Moderne]

Recipes: Deterministic code transformation

With the LST as the underlying model, recipes provide the mechanism for change. A recipe is a program that traverses the LST, finds patterns, and applies transformations. It’s like a querying language for an LST.

Unlike ad hoc scripts or probabilistic AI suggestions, recipes are:

  • Deterministic: the same input always produces the same output.
  • Repeatable: they can be applied across one repo or thousands.
  • Composable: small recipes can be combined into large playbooks.
  • Auditable: version-controlled, testable, and reviewable.
  • Idempotent: no matter how many times you run them, the result is the same.
  • Battle-tested: validated with frequent testing by an active open source community on thousands of public code repositories.

OpenRewrite recipe constructs

OpenRewrite supports two main approaches to writing recipes, each suited to different types of transformations.

Declarative recipes. Most recipes are written declaratively using YAML configuration. These are easy to write, read, and maintain without requiring Java programming knowledge. They typically reference existing recipes from the OpenRewrite recipe catalog, which contains hundreds of pre-built transformations for common frameworks and libraries, or they can allow you to compose multiple custom recipes together.

Declarative recipes can be as simple as referencing one recipe or as complex as orchestrating complete migrations. Here is a relatively simple example of a partial migration:

type: specs.openrewrite.org/v1beta/recipe
name: com.example.MigrateToJUnit5
displayName: Migrate from JUnit 4 to JUnit 5
recipeList:
  - org.openrewrite.java.ChangeType:
      oldFullyQualifiedTypeName: org.junit.Test
      newFullyQualifiedTypeName: org.junit.jupiter.api.Test
  - org.openrewrite.java.ChangeType:
      oldFullyQualifiedTypeName: org.junit.Assert
      newFullyQualifiedTypeName: org.junit.jupiter.api.Assertions
  - org.openrewrite.maven.AddDependency:
      groupId: org.junit.jupiter
      artifactId: junit-jupiter
      version: 5.8.2



Imperative recipes. For complex transformations that require custom logic, recipes can be written as Java programs. These imperative recipes give you full control over the transformation process and access to the complete LST structure.

Running recipes uses a well-established computer science pattern called the visitor pattern. But here’s the key. Recipes don’t visit your source code directly—they visit the LST representation.

The process works like this:

  1. Source code is parsed into an LST (with full semantic information).
  2. The visitor traverses the LST nodes systematically.
  3. Transformations modify the LST structure.
  4. The LST is written back to source code.

Think of a recipe like a building inspector, but instead of walking through the actual building (source code repository), the inspector is working from detailed architectural blueprints (LST):

  1. You walk through every room (LST node) systematically.
  2. At each room, you check if it needs attention (does this method call match our pattern?).
  3. If work is needed, you make the change (modify the LST node).
  4. If no work is needed, you move on.
  5. You automatically handle going to the next room (LST traversal).

Here is a simple example of what an LST (with all the semantic information and preserved formatting) looks like compared to source code:

// Source code snippet
// The user's name
private String name = "Java";

// LST representation
J.VariableDeclarations | "// The user's name\nprivate String name = "Java""
|---J.Modifier | "private"
|---J.Identifier | "String"
\---J.VariableDeclarations.NamedVariable | "name = "Java""
    |---J.Identifier | "name"
    \---J.Literal | ""Java""

Recipes work with many different file types, including XML and YAML, to modify things like Maven POMs and other configuration files. They can also create new files when needed as part of migrations. Recipes don’t even have to modify code at all: a powerful benefit of the LST’s rich data is that a recipe can simply gather data and insights, analyzing code bases to generate data tables for reports, metrics, or visualizations that help teams understand their code before making changes.

Testing recipes: Deterministic and reliable

OpenRewrite’s deterministic nature makes recipes easy to test. Here’s how simple it is to visualize the changes a recipe should make, and to verify it works correctly:

@Test
void migrateJUnitTest() {
    rewriteRun(
        // The recipe to test
        new MigrateToJUnit5(),
        
        // Before: JUnit 4 code
        java("""
            import org.junit.Test;
            
            public class MyTest {
                @Test
                public void testSomething() {}
            }
            """),
            
        // After: Expected JUnit 5 result
        java("""
            import org.junit.jupiter.api.Test;
            
            public class MyTest {
                @Test
                public void testSomething() {}
            }
            """)
    );
}

This test framework validates that the recipe produces exactly the expected output—no more, no less. Because recipes are deterministic, the same input always produces the same result, making them reliable and testable at scale.

Isn’t AI enough for app modernization?

With AI assistants like GitHub Copilot or Amazon Q Developer available, it’s natural to ask, can’t AI just handle modernization?

AI is powerful, but not for modernizing code at scale, for the following reasons:

  • Context is limited. Models can’t see your full dependency graph or organizational conventions.
  • Outputs are probabilistic. Even a 1% error rate means thousands of broken builds in a large estate.
  • Repeatability is missing. Each AI suggestion is unique, not reusable across repos.
  • RAG doesn’t scale. Retrieval-augmented generation can’t handle billions of lines of code coherently. The more context, the more confusion.

AI does excel in complementary areas, including summarizing code, capturing developer intent, writing new code, and explaining results. OpenRewrite, by contrast, excels at making consistent, accurate changes to your code, file by file and repo by repo.

However, AI and OpenRewrite recipes can work in concert through tool calling. AI interprets queries and orchestrates recipes, while OpenRewrite performs the actual transformations with compiler accuracy. AI can also accelerate the authoring of recipes themselves, reducing the time from idea to working automation. That’s a safer application of AI than making the changes directly, because recipes are deterministic, easily testable, and reusable across repos.

At-scale app modernization

Individual developers can run OpenRewrite recipes directly with build tools using mvn rewrite:run for Maven or gradle rewriteRun for Gradle. This is ideal for a single repository, but LSTs are built in-memory and are transient, so the approach breaks down when you need to run recipes across multiple repos. Scaling this to hundreds or thousands of code bases means repeating the process manually, repo-by-repo, and it quickly becomes untenable at scale.

With the scale at which large enterprises operate, modernizing thousands of repositories containing billions of lines of code is where the real test lies. OpenRewrite provides the deterministic automation engine, but organizations need more than an engine; they need a way to operate it across their entire application portfolio. That’s where Moderne comes in.

Moderne is the platform that horizontally scales OpenRewrite. It can execute recipes across thousands of repositories in parallel, managing organizational hierarchies, integrating with CI/CD pipelines, and tracking results across billions of lines of code. In effect, Moderne turns OpenRewrite from a powerful framework into a mass-scale modernization system.

With Moderne, teams can:

  • Run a Spring Boot 2.7 to 3.5 migration across hundreds of apps in a single coordinated campaign.
  • Apply security patches to every repository, not just the most business-critical ones.
  • Standardize logging, dependency versions, or configuration across the entire enterprise.
  • Research and understand their code at scale; query billions of lines in minutes to uncover usage patterns, dependencies, or risks before making changes.
  • See the impact of changes through dashboards and reports, building confidence in automation.

This is modernization not as a series of siloed projects, but as a continuous capability. Practitioners call it tech stack liquidity—the ability to evolve an entire software estate as fluidly as refactoring a single class.

A recipe for addressing technical debt

Frameworks evolve, APIs deprecate, security standards tighten. Without automation, organizations quickly drown in technical debt.

OpenRewrite provides a deterministic foundation for tackling this problem. Its Lossless Semantic Trees deliver a full-fidelity representation of code, and its recipes make transformations precise, repeatable, and auditable. Combined with Moderne’s platform, it enables at-scale modernization across billions of lines of code.

AI will continue to play an important role, making modernization more conversational and accelerating recipe authoring. But deterministic automation is the foundation that makes modernization safe, scalable, and sustainable. With OpenRewrite, you can evolve your code base continuously, safely, and at scale, future-proofing your systems for decades to come.

New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.


PyTorch team unveils framework for programming clusters 23 Oct 2025, 2:19 am

The PyTorch team at Meta, stewards of the PyTorch open source machine learning framework, has unveiled Monarch, a distributed programming framework intended to bring the simplicity of PyTorch to entire clusters. Monarch pairs a Python-based front end, which supports integration with existing code and libraries such as PyTorch, with a Rust-based back end that provides performance, scalability, and robustness, the team said.

Announced October 22, Monarch is a framework based on scalable actor messaging that lets users program distributed systems the way a single machine would be programmed, thus hiding the complexity of distributed computing, the PyTorch team said. Monarch is currently in an experimental stage; installation instructions can be found at meta-pytorch.org.

Monarch organizes processes, actors, and hosts into a scalable multidimensional array, or mesh, that can be manipulated directly. Users can operate on entire meshes, or slices of them, with simple APIs, with Monarch handling distribution and vectorization automatically. Developers can write code as if nothing fails, according to the PyTorch team. But when something does fail, Monarch fails fast by stopping the whole program. Later on, users can add fine-grained fault handling where needed, catching and recovering from failures.

Monarch splits control plane messaging from data plane transfers, enabling direct GPU-to-GPU memory transfers across a cluster. Commands are sent through one path while data moves through another. Monarch integrates with PyTorch to provide tensors that are sharded across clusters of GPUs. Tensor operations look local but are executed across large distributed clusters, with Monarch handling the complexity of coordination across thousands of GPUs, the PyTorch team said.

The PyTorch team warned that in Monarch’s current stage of development, users should expect bugs, incomplete features, and APIs that may change in future versions.


Serious vulnerability found in Rust library 23 Oct 2025, 12:10 am

Developers creating projects in the Rust programming language, as well as IT leaders with Rust-based applications in their environments, should pay attention to a serious vulnerability found in one of the programming language’s libraries.

Researchers at Edera say they have uncovered a critical boundary-parsing bug, dubbed TARmageddon (CVE-2025-62518), in the popular async-tar Rust library. And not only is it in this library, but also in its many forks, including the widely used tokio-tar.

“In the worst-case scenario, this vulnerability has a severity of 8.1 (High) and can lead to Remote Code Execution (RCE) through file overwriting attacks, such as replacing configuration files or hijacking build backends,” the researchers say in a report. Another possible impact is propagation through dependent applications, better known as a supply chain attack.

The first recommended action is to patch all active forks, since this vulnerability impacts major, widely-used projects, the researchers say, including uv (Astral’s Python package manager), testcontainers, and wasmCloud. “Due to the widespread nature of tokio-tar in various forms, it is not possible to truly quantify upfront the blast radius of this bug across the ecosystem,” they say.

To make things worse, the researchers warn, the highly downloaded tokio-tar remains unpatched, probably because it’s no longer actively maintained.

Edera suggests that developers who rely on tokio-tar consider migrating to an actively maintained fork such as astral-tokio-tar version 0.5.6 or later, which has been patched.

IT leaders also need to scan their applications to see if any were developed in Rust and are at risk.

Why is it critical?

TAR files are used in Unix and Linux systems for bundling multiple directories and files into an archive file that retains the full directory structure and metadata of the original information, explains Robert Beggs, head of Canadian incident response firm DigitalDefence. Archive files are commonly used in backups, or for packing software for purposes such as distributing source code.

Because of the way in which particular versions of the TAR libraries have been written, a potential vulnerability exists, he said in an email to CSO, noting,  “In the worst case, it would allow an attacker to execute arbitrary code on a host system and engage in malicious actions, such as overwriting critical files (configuration files, build scripts), or gaining unauthorized filesystem access.” Exploitation could also result in the compromise of any system that extracts files from the malicious TAR.  

“The vulnerability is especially serious because the vulnerable TAR libraries are often present as part of applications that are not actively maintained, and may be missed when patching or otherwise mitigating the issue,” he added.

While there are as yet no known exploits of this vulnerability, Beggs said that can change quickly. “It is a high-severity vulnerability, 8.1 on a scale of 1 to 10,” he pointed out, “so it will likely attract the attention of attackers.”

Recommendations

He recommends infosec leaders: 

  • audit code to identify dependencies on forks or wrappers of tokio-tar and ensure that they are also patched;
  • review usage of TAR files in continuous integration/continuous deployment environments as well as containers, and ensure that they are patched;
  • isolate (sandbox) archives when processing them, and avoid extracting TAR files from untrusted sources;
  • continue to monitor for possible exploits or further vulnerabilities associated with the library.

Admins may also be interested in this advisory from Astral Security, which maintains astral-tokio-tar, explaining the problem.

The bug was discovered in July and disclosed that month to the maintainers of all affected libraries, the Rust Foundation, and a number of downstream projects. It was agreed that details wouldn’t be released until this week.

Because the most popular fork (tokio-tar, with over 5 million downloads on crates.io) appears to be no longer actively maintained, Edera co-ordinated a decentralized disclosure across the complex fork lineage.

Possible consequences

The vulnerability is a desynchronization flaw that allows an attacker to ‘smuggle’ additional archive entries into TAR extractions, says Edera. It occurs when processing nested TAR files that exhibit a specific mismatch between their PAX extended headers and ustar headers. The flaw stems from the parser’s inconsistent logic when determining file data boundaries.

Among the possible infection scenarios painted by Edera are

  • an attack on Python package managers using tokio-tar. An attacker uploads a malicious package to the open source PyPI repository, from which developers download useful utilities. The package’s outer TAR container has a legitimate file but the hidden inner TAR contains a malicious one that hijacks the build backend. This hidden inner TAR introduces unexpected or overwritten files, which compromises the test environment and pollutes the supply chain;
  • an attack on any system with separate ‘scan/approve’ phases.  A security scanner analyzes the outer, clean TAR and approves its limited file set. However, the extraction process using the vulnerable library pulls in additional, unapproved, and unscanned files from the inner TAR, resulting in a security control bypass and policy violation.

Rust developers say the language allows the creation of memory-safe applications, but, say Edera researchers, “the discovery of TARmageddon is an important reminder that Rust is not a silver bullet.”

“It does not eliminate logic bugs,” the report points out, “and this parsing inconsistency is fundamentally a logic flaw. Developers must remain vigilant against all classes of vulnerabilities, regardless of the language used.”

The report is also a reminder of the hazards of relying on unmaintained open source libraries in code.

This article originally appeared on CSOonline.

(image/jpeg; 15.45 MB)

85% of developers use AI regularly – JetBrains survey 22 Oct 2025, 9:18 pm

AI usage has become standard practice in software development, with 85% of developers in a recent JetBrains survey citing regular use of AI tools for coding and development. Additionally, 62% were relying on at least one AI-powered coding assistant, agent, or code editor. Only 15% of respondents had not adopted AI tools in their daily work.

These findings were included in JetBrains’ State of the Developer Ecosystem Report 2025, which was unveiled October 15. The survey covered topics including the use of AI tools, which programming languages developers currently use and want to use, and their perceptions of the current job market for developers.

The JetBrains report notes that 68% of developers anticipate AI proficiency will become a job requirement. Some 29% of developers said they were hopeful about the increasing role of AI in society and 22% said they were excited, while 17% reported being anxious and 6% fearful. The most commonly used AI tools among the developers were ChatGPT (41%) and GitHub Copilot (30%).

The top five concerns about AI reported by the developers were the quality of code (23%), limited understanding of complex code and logic by AI tools (18%), privacy and security (13%), negative effects on coding and development skills (11%), and lack of context awareness (10%). The top five benefits of using AI in coding and software development, the developers reported, were increased productivity (74%), faster completion of repetitive tasks (73%), less time spent searching for information (72%), faster coding and development (69%), and faster learning of new tools and technologies (65%).

JetBrains’ report was based on responses from 24,534 developers across 194 countries, surveyed from April to June. China had the most respondents, at 20%, followed by the USA at 13% and India at 12%.

Other findings of JetBrains’ State of the Developer Ecosystem Report 2025:

  • The top five primary languages were Python (35%), Java (33%), JavaScript (26%), TypeScript (22%), and HTML/CSS (16%).
  • The top five languages developers most want to adopt next were Go (11%), Rust (10%), Python (7%), Kotlin (6%), and TypeScript (5%).
  • 75% of the developers surveyed build websites and business software, while areas like games and AR/VR represent smaller segments.
  • Amazon Web Services (AWS) is used by the largest share of the developers surveyed (43%), followed by Google Cloud Platform (22%) and Microsoft Azure (22%).
  • 61% of junior developers find the job market challenging while 34% of senior developers share the same concern.

(image/jpeg; 8.71 MB)

The best new features in Java 25 22 Oct 2025, 9:00 am

Java continues its fast and feature-packed release schedule, with JDK 25 delivering syntax improvements, simplifications, and performance enhancements. This article looks at the core updates in Java 25, with code and usage examples. JDK 25 is now the most current long-term support (LTS) release, and it’s packed with new features to explore.

Simpler source files and instance main methods

Over the years, Java has consistently moved away from its verbose syntax and toward greater simplicity. A longstanding hurdle for Java beginners was the verbosity of defining even a simple “Hello World” program:

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, InfoWorld!");
    }
}

A beginner had to either ignore mysterious keywords like public, class, and static or learn about visibility, classes, and static members before ever touching on Java syntax basics.

Java 25 addresses this issue with JEP 512: Compact source files and instance main methods, which lets us write the same program as:


void main() {
    IO.println("Hello, World!");
}

The most obvious thing to notice is that all the OOP boilerplate is gone. Of course, you can add this code back in as needed. But also notice the IO.println() call. The basic console I/O methods now live in the IO class in java.lang, so they do not require an explicit import. This addresses another longstanding gripe among Java developers: how much code was required simply to print to the console.

New flexible constructors

Another win for flexibility over formality, JEP 513: Flexible constructor bodies lands in Java 25, following a long incubation period. Flexible constructors, as the name suggests, make constructors less rigid. Specifically, Java no longer requires calling super() or this() as the first thing that happens in a constructor.

Previously, even if you did not explicitly call super() or this(), the compiler would do it for you. And, if you tried calling them elsewhere, it would cause an error. All of this is now more flexible.

There still are some rules that must be observed, but it’s now possible to run initialization code on the class itself before calling super(). This avoids situations where you might have to do additional work because the subclass initialization obviates the superclass initialization. (For more about this, look at the Person->Employee example for JEP 513.)

To fully understand the benefits of flexible constructors, consider an example where you have a Shape superclass, and you want the constructor to accept an area:

class Shape {
    final int area;
    public Shape(int area) {
        if (area < 0) throw new IllegalArgumentException("Area cannot be negative");
        this.area = area;
    }
}

Now say you want to have a Rectangle class. In Java before JDK 25, you’d have to somehow extract the calculation to use it in the super() call, usually using a static method:


// The old way
class Rectangle extends Shape {
    private static int checkAndCalcArea(int w, int h) {
        if (w <= 0 || h <= 0) throw new IllegalArgumentException("Invalid dimensions");
        return w * h;
    }
    public Rectangle(int width, int height) { super(checkAndCalcArea(width, height)); }
}

This code is quite clunky. But in Java 25, it’s easier to express your intent and run the area calculation directly in the Rectangle constructor:

class Rectangle extends Shape {
    final int width;
    final int height;

    public Rectangle(int width, int height) {
        if (width <= 0 || height <= 0) throw new IllegalArgumentException("Invalid dimensions");
        super(width * height);
        this.width = width;
        this.height = height;
    }
}

Wholesale module imports

Another feature finalized in JDK 25, JEP 511: Module import declarations, lets you import an entire module instead of having to import each package one by one.

Although there are some nuances (like the need to explicitly resolve package name collisions), the basic idea is that instead of having to import every package in a module, you can just import the entire module with all its packages included. This example from the JEP shows the old way of importing the contents of a module:

import java.util.Map;                   // or import java.util.*;
import java.util.function.Function;     // or import java.util.function.*;
import java.util.stream.Collectors;     // or import java.util.stream.*;
import java.util.stream.Stream;         // (can be removed)

String[] fruits = new String[] { "apple", "berry", "citrus" };
Map<String, String> m =
    Stream.of(fruits)
          .collect(Collectors.toMap(s -> s.toUpperCase().substring(0,1),
                                    Function.identity()));

Now you can just type:

import module java.base; // Imports the module containing the required packages

Scoped values

Scoped values are a convenient and efficient alternative to thread-local variables, especially for use with virtual threads and structured concurrency.

A common example of using thread-local variables is in web apps, where you want to keep data in memory for one specific thread. For example, you might retrieve the user’s data and place it in memory to be used by whatever business logic is executed along the way.

The drawbacks of thread-local variables become more evident with virtual threads. Apps using virtual threads may spawn hundreds or even thousands of concurrent virtual threads, and the parent thread shares its thread-local variables with all of them. Thread-locals also live for the entire life of the thread and are fully mutable.

Scoped values in JDK 25 (JEP 506) are immutable and only live for the life of the calling method. That means they can be efficiently shared among any number of virtual threads, and it is simple for developers to define their lifespan.

Scoped values let you declare the shared value and pass in a function that initiates all subsequent work:


ScopedValue.where(NAME, "duke").run(() -> {
    // ... NAME.get() ... call methods using NAME ...
});

In this case, NAME itself is the scoped value:

static final ScopedValue<String> NAME = ScopedValue.newInstance();

Primitive types in patterns

Still in preview in Java 25, JEP 507 lets us use primitives like int and double in patterns, instanceof, and switch. Although this looks like a minor improvement, it’s another step toward eventually uniting primitives and objects in Project Valhalla.

Compact object headers

Compact object headers (JEP 519) improve memory efficiency in the JVM. In Java 25 the feature graduates from an experimental flag to a fully supported product option; you enable it with -XX:+UseCompactObjectHeaders, and no code changes are required. It reduces per-object memory overhead, which for some applications can result in a significant gain.

Object headers hold the metadata the JVM keeps for every object in memory. Compact headers use clever encoding to shrink that metadata into a smaller footprint. The gain in memory efficiency also reduces the frequency of garbage collection.

Generational garbage collection

Although the G1 collector remains the default, the Shenandoah collector is a popular option for low-latency server applications. It minimizes pauses during garbage collection and was recently extended with a generational mode, which graduates from an experimental feature to a fully supported option in Java 25.

While most heap objects are allocated and disposed of quickly, a smaller number live much longer. Generational garbage collection takes advantage of that fact by allocating new objects in a young region of the heap, sweeping that region frequently to dispose of short-lived objects, and promoting the survivors into an old-generation space that is collected less often.

As a rule of thumb, when you need maximum throughput (that is, overall performance) stick with the G1 default. If you need minimum latency, meaning the fewest possible pauses for garbage collection, consider Shenandoah.

To use Shenandoah, include the following switch when running Java:

java -XX:+UseShenandoahGC

To opt into the generational mode, add -XX:ShenandoahGCMode=generational as well.

Conclusion

When I first started using Java, it was an up-and-coming language that I appreciated for its power, especially in web application development. Although I dabble in JVM alternatives like Kotlin and Clojure, I still have a deep affection for Java. It’s nice to see the Java team’s commitment to keeping Java relevant, even when it means refactoring core parts of the language.

(image/jpeg; 2.47 MB)

How to use Python dataclasses 22 Oct 2025, 9:00 am

Everything in Python is an object, or so the saying goes. If you want to create your own custom objects, with their own properties and methods, you use Python’s class object to do it. But creating classes in Python sometimes means writing loads of repetitive, boilerplate code; for example, to set up the class instance from the parameters passed to it or to create common functions like comparison operators.

Dataclasses, introduced in Python 3.7 (and backported to Python 3.6), provide a handy, less-verbose way to create classes. Many of the common things you do in a class, like instantiating properties from the arguments passed to the class, can be reduced to a few basic instructions by using dataclasses.

The backstage power of Python dataclasses

Consider this example of a conventional class in Python:


class Book:
    '''Object for tracking physical books in a collection.'''
    def __init__(self, name: str, weight: float, shelf_id:int = 0):
        self.name = name
        self.weight = weight # in grams, for calculating shipping
        self.shelf_id = shelf_id
    def __repr__(self):
        return (f"Book(name={self.name!r}, "
                f"weight={self.weight!r}, shelf_id={self.shelf_id!r})")

The biggest headache here is that you must copy each of the arguments passed to __init__ to the object’s properties. This isn’t so bad if you’re only dealing with Book, but what if you have additional classes—say, a Bookshelf, Library, Warehouse, and so on? Plus, typing all that code by hand increases your chances of making a mistake.

Here’s the same class implemented as a Python dataclass:


from dataclasses import dataclass

@dataclass
class Book:
    '''Object for tracking physical books in a collection.'''
    name: str
    weight: float 
    shelf_id: int = 0

When you specify properties, called fields, in a dataclass, the @dataclass decorator automatically generates all the code needed to initialize them. It also preserves the type information for each property, so if you use a linting tool that checks type information, it will ensure that you’re supplying the right kinds of values to the class constructor.

Another thing @dataclass does behind the scenes is to automatically create code for common dunder methods in the class. In the conventional class above, we had to create our own __repr__. In the dataclass, the @dataclass decorator generates the __repr__ for you. While you still can override the generated code, you don’t need to manually write code for the most common cases.

Once a dataclass is created, it is functionally identical to a regular class, and there is no runtime performance penalty for using it. The only cost is a small, one-time overhead incurred when the @dataclass decorator builds the class.

Advanced Python dataclass initialization

The dataclass decorator can take initialization options of its own. Most of the time you won’t need to supply them, but they can come in handy for certain edge cases. Here are some of the most useful ones (they’re all True/False flags), with a short example after the list:

  • frozen: Generates class instances that are read-only. Once data has been assigned, it can’t be modified. This is useful if instances of the dataclass are intended to be hashable, which allows them (among other things) to be used as dictionary keys. If you set frozen, the generated dataclass will also automatically have a __hash__ method created for it. (You can also use unsafe_hash=True to generate a __hash__ method regardless of whether the dataclass is read-only, but, as the name suggests, that is only safe if you’re sure instances won’t be mutated in ways that change their hash.)
  • slots: Allows instances of dataclasses to use less memory by only allowing fields explicitly defined in the class. The memory savings really only manifest at scale — e.g., when generating upwards of thousands of instances of a given object. If you’re only generating a couple of dataclass instances at a time, it probably isn’t worth it.
  • kw_only: This setting makes all fields for the class keyword-only, so they must be defined using keyword arguments rather than positional arguments. This is a useful way to provide a dataclass instance’s arguments by way of a dictionary.
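
For example, a small dataclass that combines these options might look like the following. This is just an illustrative sketch (the Point class is hypothetical), and slots and kw_only require Python 3.10 or later:

from dataclasses import dataclass

@dataclass(frozen=True, slots=True, kw_only=True)
class Point:
    x: float
    y: float

p = Point(x=1.0, y=2.0)   # kw_only=True: Point(1.0, 2.0) would raise a TypeError
print(hash(p))            # frozen=True (with the default eq=True) makes instances hashable
# p.x = 5.0               # would raise dataclasses.FrozenInstanceError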

Customizing Python dataclass fields

How dataclasses work by default should be okay for the majority of use cases. Sometimes, though, you need to fine-tune how the fields in your dataclass are initialized. The following code sample demonstrates how to use the field function for fine-tuning:


from dataclasses import dataclass, field
from typing import List

@dataclass
class Book:
    '''Object for tracking physical books in a collection.'''
    name: str     
    condition: str = field(compare=False)    
    weight: float = field(default=0.0, repr=False)
    shelf_id: int = 0
    chapters: List[str] = field(default_factory=list)

When you set a field’s default value to a call to field(), it changes how the field is set up depending on what parameters you provide. These are the most commonly used options for field (though there are others):

  • default: Sets the default value for the field. You should use default if you a) use field to change any other parameters for the field, and b) want to set a default value on the field on top of that. In the above example, we used default to set weight to 0.0.
  • default_factory: Provides the name of a function, which takes no parameters, that returns some object to serve as the default value for the field. In the example, we wanted chapters to be an empty list.
  • repr: By default (True), controls if the field in question shows up in the automatically generated __repr__ for the dataclass. In this case, we didn’t want the book’s weight shown in the __repr__, so we used repr=False to omit it.
  • compare: By default (True), includes the field in the comparison methods automatically generated for the dataclass. Here, we didn’t want condition to be used as part of the comparison for two books, so we set compare=False.

Note that we adjusted the order of the fields so that the fields without default values appear first. A quick demonstration of these customizations follows.
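
For instance, constructing a couple of instances of the Book class above (an illustrative check, not part of the original example) shows the effect of repr=False and compare=False:

paperback = Book("Emma", "Good", weight=280.0)
worn_copy = Book("Emma", "Worn", weight=280.0)

print(paperback)              # Book(name='Emma', condition='Good', shelf_id=0, chapters=[]) - weight is hidden
print(paperback == worn_copy) # True - condition is excluded from the generated comparison methods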

Controlling Python dataclass initialization

At this point, you might be wondering, “How do I get control over the init process to make more fine-grained changes if the __init__ method of a dataclass is generated automatically?” In these cases, you can use the __post_init__ method or the InitVar type.

__post_init__

If you include the __post_init__ method in your dataclass definition, you can provide instructions for modifying fields or other instance data:


from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Book:
    '''Object for tracking physical books in a collection.'''
    name: str    
    weight: float = field(default=0.0, repr=False)
    shelf_id: Optional[int] = field(init=False)
    chapters: List[str] = field(default_factory=list)
    condition: str = field(default="Good", compare=False)

    def __post_init__(self):
        if self.condition == "Discarded":
            self.shelf_id = None
        else:
            self.shelf_id = 0

In this example, we’ve created a __post_init__ method to set shelf_id to None if the book’s condition is initialized as "Discarded". Note how we use field to initialize shelf_id, and pass init as False to field. This means shelf_id won’t be initialized in __init__, but it is registered as a field with the dataclass overall, with type information.

InitVar

Another way to customize Python dataclass setup is to use the InitVar type. This lets you specify a field that will be passed to __init__ and then to __post_init__, but won’t be stored in the class instance.

By using InitVar, you can take in parameters when setting up the dataclass that are only used during initialization. Here’s an example:


from dataclasses import dataclass, field, InitVar
from typing import List

@dataclass
class Book:
    '''Object for tracking physical books in a collection.'''
    name: str     
    condition: InitVar[str] = "Good"
    weight: float = field(default=0.0, repr=False)
    shelf_id: int = field(init=False)
    chapters: List[str] = field(default_factory=list)

    def __post_init__(self, condition):
        if condition == "Unacceptable":
            self.shelf_id = None
        else:
            self.shelf_id = 0

Setting a field’s type to InitVar (with its subtype being the actual field type) signals to @dataclass to not make that field into a dataclass field, but to pass the data along to __post_init__ as an argument.

In this version of our Book class, we’re not storing condition as a field in the class instance. We’re only using condition during the initialization phase. If we find that condition was set to "Unacceptable", we set shelf_id to None—but we don’t store condition itself in the class instance.
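
A quick check (reusing the Book class just defined; the values are arbitrary) confirms that condition influences initialization without ever becoming a field:

from dataclasses import fields

spare = Book("Spare copy", condition="Unacceptable")
print(spare.shelf_id)                    # None, set by __post_init__
print([f.name for f in fields(spare)])   # ['name', 'weight', 'shelf_id', 'chapters'] - no 'condition'
print("condition" in vars(spare))        # False - the InitVar value is not stored on the instance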

When to use Python dataclasses, and when not to

One common scenario for using dataclasses is to replace the namedtuple. Dataclasses offer the same behaviors and more, and they can be made immutable (as namedtuples are) by simply using @dataclass(frozen=True) as the decorator.

Another possible use case is replacing nested dictionaries (which can be clumsy) with nested instances of dataclasses. If you have a dataclass Library, with a list property of shelves, you could use a dataclass ReadingRoom to populate that list, then add methods to make it easy to access nested items (e.g., a book on a shelf in a particular room).
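
As a rough sketch of that idea (the class names and the convenience method are illustrative, not a prescribed design), nested dataclasses might look like this:

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Book:
    name: str

@dataclass
class Shelf:
    books: List[Book] = field(default_factory=list)

@dataclass
class ReadingRoom:
    name: str
    shelves: List[Shelf] = field(default_factory=list)

@dataclass
class Library:
    rooms: List[ReadingRoom] = field(default_factory=list)

    def find_book(self, title: str) -> Optional[Tuple[str, Book]]:
        """Walk the nested structure and return (room name, book) for a match."""
        for room in self.rooms:
            for shelf in room.shelves:
                for book in shelf.books:
                    if book.name == title:
                        return room.name, book
        return None

library = Library(rooms=[ReadingRoom("East wing", shelves=[Shelf(books=[Book("Emma")])])])
print(library.find_book("Emma"))   # ('East wing', Book(name='Emma'))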

It’s also important to note, though, that not every Python class needs to be a dataclass. If you’re creating a class mainly to group together a bunch of static methods, rather than as a container for data, you don’t need to make it a dataclass. For instance, a common pattern with parsers is to have a class that takes in an abstract syntax tree, walks the tree, and dispatches calls to different methods in the class based on the node type. Because the parser class has very little data of its own, a dataclass isn’t useful here.

(image/jpeg; 10.71 MB)

Self-propagating worm found in marketplaces for Visual Studio Code extensions 22 Oct 2025, 1:23 am

A month after a self-propagating worm was discovered in the open source NPM code repository, a similar worm has been found targeting Visual Studio Code extensions in open marketplaces.

Researchers at Israel-based Koi Security say the malware, which they dub GlassWorm, has been found in extensions in the OpenVSX and Microsoft VS Code marketplaces.

“This is one of the most sophisticated supply chain attacks we’ve ever analyzed,” the researchers warn. “And it’s spreading right now.”

Once installed, the compromised extensions harvest NPM, GitHub, and Git credentials left by developers in their work, drain funds from 49 cryptocurrency wallets, deploy SOCKS proxy servers on developer computers, install hidden VNC servers for remote access, and use stolen credentials to compromise additional packages and extensions.

Seven OpenVSX extensions were compromised last week and were downloaded over 35,000 times, the report says. In addition, another infected extension was detected in the VS Code marketplace over the weekend.

The worms in the extensions evade detection using an old technique: Including malware written with Unicode variation selectors. These are special characters that are part of the Unicode specification but don’t produce any visual output.

“To a developer doing code review, it looks like blank lines or whitespace,” says Koi Security. “To static analysis tools scanning for suspicious code, it looks like nothing at all.” But to a JavaScript interpreter, it’s executable code.
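
To illustrate the general hiding technique (a minimal Python sketch of encoding bytes as invisible variation selectors, not the GlassWorm payload itself; how invisible the output looks depends on the terminal or editor):

# Characters in the Variation Selectors Supplement (U+E0100-U+E01EF) have no visual rendering,
# so bytes mapped onto them are effectively invisible in most editors and diffs.
hidden = "".join(chr(0xE0100 + b) for b in b"hi")

print(f"[{hidden}]")   # typically prints [] with nothing visible between the brackets
print(len(hidden))     # 2 - the characters, and the data they carry, are still there

# Decoding is just the reverse mapping.
print(bytes(ord(ch) - 0xE0100 for ch in hidden).decode())   # hi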

“CISOs should treat this as an immediate security incident if their developers use VS Code,” says Tanya Janca, head of the Canadian secure coding training consultancy SheHacksPurple.

“Because extensions inherit full VS Code permissions, once installed they can steal credentials, exfiltrate source code, and enable remote command and control (for example, via VNC and SOCKS proxies). Risk level: Very High.”

CISOs should start their incident response processes immediately, she said, conducting an inventory to determine where VS Code is used across the organization and which extensions are installed, and checking whether any are on the known affected list.

They should also monitor for suspicious application behavior, she added, specifically strange outgoing connections and processes mentioned in the research, unapproved VNC servers, and long-lived SOCKS proxy processes.

Educate your developers

In the meantime, Janca recommends disabling all application auto-updates, and educating all developers about the situation and the extensions to watch for.

“Block access to the OpenVSX registry and all other untrusted/unknown marketplaces, permanently,” she advises. “Have developers log out of their developer tools and reboot. Revoke and then rotate any credentials that might have been spilled before logging back into everything.”

Follow normal practices for incident response, she concluded: Detect, contain, eradicate, recover.

Marketplaces targeted

The Koi Security report is the latest in a series of warnings that threat actors are increasingly targeting VS Code marketplaces in supply chain attacks.  Last week, Koi Security exposed a threat actor dubbed TigerJack spreading malicious extensions. And researchers at Wiz just published research showing the widespread abuse of the OpenVSX and VS Code marketplaces.

The use of Unicode to hide malware was exposed as recently as last month by researchers at Radware, who found it being used to compromise ChatGPT.

These reports should come as no surprise. Open code marketplaces, where developers can upload code for others to use in their applications, have long been targets for threat actors as vehicles for inserting malicious code into projects. The code then spreads into developer or customer environments to steal credentials and data. Collectively, these are known as supply chain attacks.

Among the most targeted repositories are GitHub, GitLab and NPM.

Microsoft gives developers the ability to add extensions and themes to Visual Studio Code to make life easier for developers, as well as to enhance functionality. An extension can add features like debuggers, new languages, or other development tools, while a theme is a type of extension that changes the appearance of the editor, controlling things like colors and fonts.

Leverages blockchain

Koi Security researchers came across the wormed extension in OpenVSX when their risk engine flagged suspicious activity in an update of an extension called CodeJoy, a developer productivity tool with hundreds of downloads. Version 1.8.3 introduced some suspicious behavioral changes: the source code included what looked like a massive gap between lines that was actually malicious code encoded in unprintable Unicode characters that can’t be viewed in a code editor.

Worse, the malware uses the public Solana blockchain as a command and control infrastructure (C2) for its goal of hunting for login credentials, especially those for crypto wallets. The malware also reaches out to a Google Calendar event as a backup C2 mechanism.

The stolen NPM, GitHub, Git, and OpenVSX credentials also help the malware spread as a worm.

Finally, the malware injects a remote access trojan onto the workstations of victim developers, turning them into SOCKS proxy servers. The workstations can then be used to access an organization’s IT systems, becoming internal network access points, persistent backdoors, proxies for attacking other internal systems and data exfiltration channels.

Developers are ‘prime target’

Developers are a prime target for attacks these days, pointed out Johannes Ullrich, dean of research at the SANS Institute. What they often don’t realize is that any extension they install, even if it appears benign, has full access to their code and may make modifications without explicitly informing the developer.

CISOs must include developers in discussions about securing development tools, he advises. Limiting permitted tools is often counterproductive, as developers will identify workarounds to get work done. Security must cooperate with developers to assist them in using the tools they need securely, and any endpoint protection product needs to be tuned to support the unique usage patterns of developers.

This isn’t just a supply-chain problem, said Will Baxter, field CISO at Team Cymru, it’s a new infrastructure layer merging cyber-crime tooling, blockchain resilience, and developer-tooling pivoting. Registry operators, threat researchers and blockchain-monitoring partners need to share intelligence and work together more closely to flag these hybrid attacks, he added.

More advice to CSOs

Janca says to lower the risk of supply chain attacks, security leaders and application security professionals should:

  • reduce the attack surface wherever possible: install only the features and software that are actually used, uninstall any VS Code extensions that are not needed, and remove unused dependencies from code;
  • monitor all employee workstations for anomalous behavior, with extra focus on those with privileged access, such as software developers;
  • apply least-privilege identity and access management, especially for developer machines;
  • implement a fast and efficient change management process that includes software supply chain changes;
  • train developers on secure coding, protecting their supply chain, and their role during incident response, to help prevent issues like this in the future or to respond faster and more gracefully;
  • use security scanning tools to reduce risk and catch issues before they become security incidents, such as extension scanners, secret scanners, supply chain security tooling (SCA and SBOM), and endpoint protection;
  • follow secrets management best practices, so that malicious packages like these cannot harvest credentials;
  • use only approved repositories and marketplaces in the organization, and block all unknown or untrusted sources for downloading code, packages, images, and extensions;
  • harden the entire software supply chain, not just third-party components and code, including regular updates and locking down access to CI/CD pipelines, developer IDEs and workstations, artifacts, and more;
  • push governments to provide a solution to the very insecure open source software ecosystem that so many of us rely on, or give preference to closed-source development languages and frameworks (though this, she admits, wouldn’t have helped in this case, as .NET is closed source but the VS Code Marketplace is not).

(image/jpeg; 9.06 MB)

.NET 10 RC 2 features .NET MAUI, Android updates 21 Oct 2025, 9:17 pm

Microsoft has published a second release candidate (RC) of the planned .NET 10 software development platform, featuring a new microphone permission for .NET MAUI (the multi-platform app UI) and enhancements to the Android mobile platform.

Unveiled October 15, RC 2 is slated to be the final release candidate of .NET 10, following .NET 10 RC 1, announced September 9. Developers can download .NET 10 RC 2 at dotnet.microsoft.com. The second RC comes with a go-live support license. General availability of the production release of .NET 10 is expected November 11 or shortly thereafter.

For microphone permission in .NET MAUI, the latest RC adds the Windows implementation of Permissions.RequestAsync(), used to request and check access permission for the device microphone. .NET MAUI also has improvements to XAML source generation, notably to debug time view inflation. Additionally for .NET MAUI, edge-to-edge support for unsafe areas in the device display has been added to Android via SafeAreaEdges, Microsoft said.

With the Entity Framework (EF) Core in RC 2, fixes have been made for EF complex JSON support, with Entity Framework now allowing the mapping of complex types to JSON. But for other parts of .NET, such as libraries, the runtime, C#, and F#, RC 2 adds no notable feature additions, according to Microsoft. Prior to RC 1 and RC 2, .NET 10 preview editions have featured new capabilities such as an XAML source generator in Preview 7,  improved JIT code generation for struct arguments in Preview 6, and user-defined compound assignment operators for C# 14 in Preview 5. The first preview arrived February 25, emphasizing capabilities across the .NET runtime, SDK, and other areas.

(image/jpeg; 5.08 MB)

Google kills its cookie killer 21 Oct 2025, 1:54 pm

Privacy Sandbox, Google’s attempt to create an alternative to cookies, looks like it has reached the end of the line. The company has announced that it is discontinuing 11 Privacy Sandbox technologies — pretty much the entire gamut. Privacy Sandbox VP Anthony Chavez said in a blog post that the team had reached the decision “after evaluating ecosystem feedback about their expected value and in light of their low levels of adoption.”

Google launched the Privacy Sandbox initiative in August 2019 as a way for advertisers to gain information on users without the need to support third-party cookies. It met much criticism from the industry. Google’s dreams of a cookie-free future took a blow after the UK’s Competition and Markets Authority (CMA) announced an antitrust investigation into whether Google was abusing its dominant position in the browser market, an initiative followed by a number of US states, each of which is pursuing its own antitrust case.

This legal pressure eventually told. In July of last year, Google backtracked on its insistence that Privacy Sandbox would be the dominant privacy technology in Chrome. The CMA gave a guarded response and said that it was still seeking further commitments from Google. In April, the company made more concessions, and now this latest move seems to have satisfied all antitrust concerns.

However, the lack of take-up of Privacy Sandbox probably played a bigger part in its discontinuation.

Poor adoption

According to Javvad Malik, lead CISO advisor at software company KnowBe4, “Low uptake is a useful signal. It suggests the Sandbox bundle didn’t deliver enough measurable value versus operational complexity and risk. When incentives, tooling, and accountability aren’t aligned, even ‘privacy‑preserving’ tech struggles to gain adoption.”

He said that while he could see why Google would want an alternative to cookies, there was a technical requirement too. “As for the cookie alternative, something being technically viable isn’t the same as adaptable. Tools need to be built with the users in mind. If there is too much friction with inconsistent APIs and uncertain ROI, many teams will struggle to adopt.”

The technologies Google is discontinuing are: Attribution Reporting API for both Chrome and Android,  IP Protection, Protected Audience API for Chrome and Android, Protected App Signals, On Device Personalization, Related Website Sets, Private Aggregation (including Shared Storage),  Select URL, SDK Runtime and Topics for both Chrome and Android.

It is unlikely that the Privacy Sandbox decision will have a marked effect on Google’s dominance of the browser market. Chrome had a 72-percent share of the browser market in September, according to Statcounter. However, it looks likely that Google’s attempts to incorporate Privacy Sandbox technologies in future web standards will be discontinued. As Malik said, “Standardisation is possible, but only where there’s multi‑stakeholder legitimacy. Without broad buy‑in from browsers, regulators, and publishers, “web standards” risk looking like vendor standards.”

What is not clear is what will happen to those companies that have been implementing Privacy Sandbox technologies within their own organizations.

A Google spokesman said, “We will be continuing our work to improve privacy across Chrome, Android and the web, but moving away from the Privacy Sandbox branding. We’re grateful to everyone who contributed to this initiative, and will continue to collaborate with the industry to develop and advance platform technologies that help support a healthy and thriving web.”

This article first appeared on CSO.

(image/jpeg; 17.06 MB)

Should you jump for a neocloud? 21 Oct 2025, 9:00 am

The cloud computing industry is experiencing a seismic shift that is steadily gaining momentum. The “neocloud” is beginning to dominate conversations about the future of digital infrastructure because this new breed of cloud platform is specifically designed for artificial intelligence workloads. Will this evolution challenge traditional cloud providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud?

Neoclouds, with their highly specialized focus, reduce inefficiencies and the general-purpose bloat that is often associated with traditional hyperscale cloud providers. These AI-centric clouds use advanced GPU-based infrastructure with a strong emphasis on optimizing costs and performance for AI and machine learning tasks. By meeting the increasing demand for AI compute and lowering costs through a streamlined infrastructure, they pose a threat to the dominance of the big three providers.

While their purpose-built design gives them an advantage for AI workloads, neoclouds also bring complexities and trade-offs. Enterprises need to understand where these platforms excel and plan how to integrate them most effectively into broader cloud strategies. Let’s explore why this buzzword demands your attention and how to stay ahead in this new era of cloud computing.

A highly strategic innovation

What makes neoclouds unique? Basically, they are built to handle the vast computing power needed for generative AI models, deep learning tasks, and other demanding applications. Generative AI itself has revolutionized the tech world, from natural language processing to generative design in manufacturing. These tasks depend on graphics processing units (GPUs), which are far better than traditional CPUs at managing parallel processing and large data calculations.

Traditional cloud providers typically offer a multipurpose infrastructure model designed to support a wide array of workloads across industries. While this flexibility makes them versatile and essential for most enterprises, it also leads to inefficiencies in AI workloads. AI requires unprecedented levels of raw processing power and high-capacity data management, capabilities that aren’t always cost-effective or seamlessly available on platforms designed for more general uses.

By contrast, neoclouds are hyper-focused on delivering specialized services such as GPU as a service (GPUaaS), optimized generative AI infrastructure, and high-performance compute environments at a lower cost. By removing the general-purpose ecosystem and focusing specifically on AI workloads, neocloud providers CoreWeave, Lambda, OpenAI, and others are establishing an important niche.

Cost savings are a core part of the value proposition. Enterprises that invest heavily in generative AI and machine learning often face ballooning infrastructure costs as they scale. Neoclouds alleviate this pain point with optimized GPU services and streamlined infrastructure, allowing companies to scale AI applications without running up exorbitant bills.

Neoclouds challenge the big three

Neoclouds represent a generational shift that threatens to erode the market share of AWS, Microsoft Azure, Google Cloud, and other hyperscalers. The big players are investing in GPU-centric services for AI workloads, but their general-purpose design inherently limits how far they can specialize. Hyperscale cloud providers support workloads ranging from legacy enterprise applications to emerging technologies like Internet of Things. However, this breadth creates complexity and inefficiencies when it comes to serving AI-first users.

Neoclouds, unburdened by the need to support everything, are outpacing hyperscalers in areas like agility, pricing, and speed of deployment for AI workloads. A shortage of GPUs and data center capacity also benefits neocloud providers, which are smaller and nimbler, allowing them to scale quickly and meet growing demand more effectively. This agility has made them increasingly attractive to AI researchers, startups, and enterprises transitioning to AI-powered technologies.

Plans, architecture, and test deployments

For organizations eager to embrace the potential of AI, neoclouds represent an opportunity to optimize AI architecture while potentially lowering costs. But jumping headlong into a neocloud strategy without adequate preparation could create risks. To truly capitalize on this emerging market, enterprises should focus on planning, architecture, and test deployments.

Planning for AI-specific workloads involves assessing current and future AI initiatives, identifying workloads that would benefit most from a specialized GPU-based infrastructure, and estimating expected growth in these computing needs. Having a clear understanding of generative AI use cases is critical at this stage. Whether it’s deploying advanced natural language models, bolstering interview analytics with computer vision, or enabling predictive analytics in logistics, clarity in business use cases will guide the choice of infrastructure.

Next, enterprises need to rethink their cloud architecture. Leveraging neoclouds alongside more traditional hyperscalers could result in a hybrid or multicloud strategy, which forces new architecture requirements. Organizations should prioritize modular and containerized designs that enable workloads to move easily between platforms. Developing efficient pipeline and orchestration strategies is also key to ensuring that AI workloads on neoclouds integrate seamlessly with other systems hosted on legacy enterprise or public cloud environments.

Finally, run pilot or test deployments to validate performance and cost claims. Neocloud providers often offer proof-of-concept opportunities or trial periods to demonstrate their platform’s capabilities. Enterprises should use these options to evaluate performance metrics such as model training times, data throughput, and GPU utilization rates. These test deployments will help fine-tune your strategy and ensure you are ready for a larger rollout.

Neoclouds disrupt cloud computing

Neoclouds are transforming cloud computing by offering purpose-built, cost-effective infrastructure for AI workloads. Their price advantages will challenge traditional cloud providers’ market share, reshape the industry, and change enterprise perceptions, fueled by their expected rapid growth.

As enterprises find themselves at the crossroads of innovation and infrastructure, they must carefully assess how neoclouds can fit into their broader architectural strategies. The transition won’t happen overnight, but by prioritizing AI workload planning, adjusting cloud architectures for hybrid approaches, and testing platforms like GPUaaS, businesses can better position themselves for the evolving cloud economy.

In short, understanding and preparing for the neocloud moment is no longer optional. Enterprises that adapt will not only optimize their AI capabilities but also stay competitive in a market increasingly shaped by intelligence-led growth. As neoclouds continue their rise, the question for enterprises won’t be should they embrace these platforms, but when and how.

(image/jpeg; 9.49 MB)

How to improve technical documentation with generative AI 21 Oct 2025, 9:00 am

Devops teams have a love-hate relationship with writing and consuming technical documentation. Developers loathe reading and maintaining undocumented code. Architecture diagrams can tell a great story, but much of it is fiction compared to the implemented architecture. Even IT service management (ITSM) process flows for incident, request, and change management are rarely followed as specified in the documentation.

CIOs, CTOs, and other digital trailblazers insist on documentation. However, project budgets rarely include technical writers, and agile teams rarely have time to do more than code-level documentation, README files, and other basics.

While product owners capture requirements in agile user stories, the documentation guiding an application’s business rules, journey maps, architecture, APIs, and standard operating procedures is rarely complete and up-to-date.

I’ve previously written about using generative AI to write requirements and agile user stories. Now, I am asking the follow-up question: How can developers, engineers, and architects use genAI tools to write and maintain accurate documentation?

Using genAI for devops and ITSM documentation

Several years ago, I worked with a development team that said documentation was worthless. They believed good code that followed naming conventions and had robust unit testing, with high code coverage and strong error handling, was all the documentation they needed. Even if they dedicated more time to documenting existing features (at the expense of developing new ones), the materials would become obsolete with the next deployment’s changes, they said.

But proponents say using generative AI tools can help devops teams maintain documentation at the pace of code changes and deployments. “Generative AI is shifting the role of software documentation from static reference material to a dynamic layer within the product experience,” says Erik Troan, CTO of Pendo. By capturing user flows and generating contextual guidance automatically, we’re seeing how documentation can now evolve in real time alongside the software itself to reduce friction and improve user efficiency.”

Dominick Profico, CTO at Bridgenext, believes AI-generated knowledge will eventually replace documentation altogether. “GenAI will allow us to reach the point where documentation that leaders have been craving and developers have been avoiding for decades will become obsolete. LLMs will reach a level where documentation is truly dynamic, generated on the fly in response to a question, a chat, or a prompt, and extracted from the model’s knowledge of the codebase, industry standards, documentation, and even support tickets and system logs.”

Regardless of what the future brings, genAI is already making it possible for devops teams to address documentation requirements more efficiently. Additionally, there are new reasons to invest in documentation, as devops teams utilize AI agents and other genAI development tools.

Define the target audience for your documentation

Before investing in documentation, it’s a best practice to define the audience and how they will use the materials. Let this be the baseline standard for defining “good enough” and “up-to-date” documentation. Here are the major audiences and needs devops teams should consider:

  • Newly onboarded developers seek documentation that covers the architecture, devops non-negotiable requirements, the software development process, and high-level code structure to become productive quickly and produce acceptable solutions according to standards.
  • External development teams want to review an API’s documentation, README files in Git repositories, data definitions in data catalogs, and guides on log files and other observability artifacts.
  • Architects, security specialists, and site reliability engineers (SREs) require documentation when recommending app modernizations and programs to address technical debt. They also need documentation to aid in incident response and perform root-cause analysis.
  • Data scientists, data governance specialists, and engineers working on data pipelines are often consuming data created by APIs and applications to be used in reports, data visualizations, analytics, and AI models. They’ll want to see the updated data catalog and to understand data lineage, which they’ll use for data-driven decision making.
  • Product managers, product owners, change leaders, and other subject matter experts (SMEs) want to know “how the system works.” While they want to avoid diving into code, they need more detail than what’s typically provided in release notes.
  • Auditors for ISO 27001, ISO 9001, SSDF, CMMI, SOC 2, and other compliance standards will want to review the required documentation.
  • GenAI coding assistants and AI agents will consume documentation to improve their relevance and accuracy.

How to use genAI tools for technical documentation

We know different audiences have different documentation needs. How can we use generative AI tools to meet these needs in targeted ways?

Document how features work

“The docs for Google Cloud’s API’s are written in code, and they proved that the only way to keep literally tens of thousands of API docs up to date was to reduce that effort to automation,” says Miles Ward, CTO at SADA. “We dumped our technical documentation into NotebookLM, now I can get a podcast explaining the nuance of feature interactions to me in plain English. The state of the art is changing rapidly, and new tools like Gemini, NotebookLM, and Mariner can help customers get their documentation to be an asset, rather than a chore.”

To document how features work, consider writing and maintaining the following:

  • A feature specification documenting requirements, including end-user documentation.
  • A short technical design that includes architecture, dependencies, testing, security, configuration, and deployment sections.
  • References, including links to agile user stories and IT service management tickets.

Some tools for functional-level documentation include Microsoft Teams, Atlassian Confluence, Google Workspace, Notion, and MediaWiki.

Document APIs, data dictionaries, and data pipelines

“One of the most exciting shifts we’re seeing with genAI in the CTO office is how it transforms documentation from an afterthought into a natural byproduct of the development process itself,” says Armando Franco, director of technology modernizations at TEKsystems. “For example, as teams build microservices, genAI can automatically produce and maintain OpenAPI specifications that accurately reflect endpoints, payloads, and authentication methods. For data teams, AI can generate lineage diagrams and data catalogs directly from SQL code and ETL pipelines, ensuring consistency across environments.”

Devops teams should remember that they are not the target audience of technical documentation. Developers who join the program or take over where the original development team leaves off are the primary audience, along with any external developers who utilize APIs or other externalized capabilities.

Different kinds of technical documentation leverage different tools:

  • The best place to document data dictionaries is in data catalogs such as Alation, Atlan, Ataccama, AWS Glue Data Catalog, Azure Data Catalog, Collibra, Data.world, Erwin Data Catalog, Google Dataplex Universal Catalog, Informatica Enterprise Data Catalog, and Secoda.
  • Dataops teams using data pipelines, data integration platforms, or other integration platforms can use visual design tools to provide data flow and lineage diagrams.
  • Tools for documenting APIs include Postman, Redocly, Swagger, and Stoplight.

Document the runtime and standard operating procedures

“Traditional documentation practices haven’t kept pace with the dynamic, real-time nature of today’s AI-driven cloud systems,” says Kevin Cochrane, CMO at Vultr. “CTOs are now using genAI tools to turn logs, configs, and runtime data into living documentation that evolves with the system, helping teams reduce friction and accelerate development. This approach turns documentation into a continuity tool: preserving shared context, reducing single points of failure, and preventing execution breakdowns across the stack.”

Devops best practices focus on workflow, tools, and configuration, leaving it to teams to decide how to document handoffs from development to operational functions. The following types of tools can help address the gaps:

  • Tools for creating operational knowledge bases and standard operating procedures: Atlassian Jira Service Manager, Freshservice Knowledge Base, ServiceNow Knowledge Management, and Zendesk Guide.
  • Log-file analysis tools with AI: Datadog, Dynatrace, LogicMonitor, Logz.io, New Relic, Splunk, and Sumo Logic.
  • Tools for visualizing public cloud infrastructure: Cloudcraft, Hava, and Lucidscale.
  • Tools to diagram the architecture, sequences, and other flows: Draw.io, Figma, Eraser, Lucidchart, Miro, and Visio.

Provide AI agents with documentation they can use

While many code-generating AI agents analyze the codebase, a growing number of them can also analyze software documentation for added context.

“When every code change is documented, AI agents can understand not just what the code does, but why it was written that way, and this historical context transforms AI from a coding assistant to a knowledgeable team member,” says Andrew Filev, CEO and founder of Zencoder. “This institutional knowledge, previously locked in developers’ heads or scattered across Slack threads, becomes searchable, actionable intelligence that improves every subsequent AI interaction.”

Devops teams should consider feeding AI code-generators documentation on APIs, user stories with acceptance criteria, coding standards, architecture principles, README files, secure coding guidelines, data privacy rules, and compliance references.

Filev adds, “LLMs work three times better with detailed documentation because they can understand context, constraints, and intentions. Teams using this approach report that after six months, their AI agents become dramatically more effective at understanding their specific codebase patterns and conventions.”

Document legacy applications

One additional use case is to address undocumented applications, especially when the original developers are no longer with the organization. Sanjay Gidwani, COO of Copado, shared three key AI capabilities that make documenting existing systems easier:

  • GenAI is great at summarizing vast amounts of material, so it can easily read existing source code and summarize the intent.
  • Many business application systems rely on configuration metadata, and AI with metadata awareness can read configurations and document them.
  • AI can analyze your data to determine the actual processes used, complete with the length of time in various stages and the identity of the participants.

While undocumented systems are problematic and may be a compliance issue, creating overly verbose documentation is also challenging. Long-form documentation is hard for humans to consume and expensive to maintain, even when assisted by generative AI. The best approach is to keep your audience in mind and maintain just enough documentation. All documentation should be targeted for the people who will review it and the LLMs that will use it to answer their questions.

(image/jpeg; 8.41 MB)

Anthropic extends Claude Code to browsers 21 Oct 2025, 1:34 am

Anthropic has launched Claude Code on the web, enabling developers to use the company’s AI coding assistant directly from their browser or smartphone, with no terminal required.

Developers can connect their first GitHub repository to Claude Code on the web at claude.com/code.

Claude Code on the web was launched as a beta research preview for Pro and Max users on October 20. Through Claude Code on the web, developers can assign multiple coding tasks to Claude that run on Anthropic-managed cloud infrastructure. Running tasks in the cloud is especially effective for answering questions about how projects work and how repositories are mapped, for routine and well-defined tasks, for bug fixes, and for back-end changes where Claude Code can use test-driven development for change verification, according to Anthropic.

Claude Code on the web lets users start coding sessions without opening their terminal. Each session runs in its own isolated sandbox with network and file restrictions and real-time progress tracking. Git interactions are handled via a secure proxy service. Developers can run tasks in parallel across different repositories from a single interface and ship more quickly with automatic pull request creation and clear change summaries, Anthropic said.

In addition to web browsers, the research preview makes Claude Code available in Anthropic’s iOS app. The company hopes to quickly refine the mobile experience based on user feedback.

(image/jpeg; 2.12 MB)

Visual Studio Code taps AI for merge conflict resolution 21 Oct 2025, 12:27 am

Visual Studio Code 1.105, the latest release of Microsoft’s popular Visual Studio Code editor, introduces several new AI coding features, including the ability to resolve merge conflicts with AI assistance, the ability to resume recent chat sessions, and the ability to install Model Context Protocol (MCP) servers from the MCP marketplace.

Announced October 9, VS Code 1.105, aka the September 2025 release, can be downloaded from code.visualstudio.com for Windows, Mac, and Linux.

Developers with VS Code 1.105 can now resolve merge conflicts with AI assistance when opening a file with Git merge conflict markers. This feature is available via a new action in the lower right-hand corner of the editor. Selecting this action opens the Chat view and begins an agentic flow with the merge base and changes from each branch as context.

VS Code 1.105 also introduces a built-in MCP marketplace that allows users to browse and install MCP servers from the Extensions view. This is enabled by the GitHub MCP Registry, providing an experience for discovering and managing MCP servers directly within the editor.

Another AI-related enhancement in the new VS Code release is support for fully qualified tool names in prompt files and chat modes, which aims to avoid naming conflicts between built-in tools and tools provided by MCP servers or extensions. Tool names are now qualified by the MCP server, extension, or tool set they are part of, Microsoft said. Developers can still use the previous notation, but a code action is available to help users migrate to the new names.

VS Code 1.105 follows the release of VS Code 1.104, which introduced automatic model selection for chat as well as enhanced agent security.

Other features highlighted in VS Code 1.105:

  • With the introduction of GPT-5-Codex, thinking tokens are now shown in chat as expandable sections in the response. The ability to examine the model’s chain of thought can be useful for debugging or for understanding the suggestions the model provides, Microsoft said.
  • The set of edit tools for Bring Your Own Key (BYOK) custom models has been improved for better integration with VS Code’s built-in tools.
  • A new setting, accessibility.verboseChatProgressUpdates, enables more detailed announcements for screen reader users about chat activity.
  • Keyboard shortcuts have been added for navigating up and down through the messages in a chat session.
  • When working in a repository that has the Copilot coding agent enabled, the “Delegate to coding agent” button in the Chat view now appears by default.
  • The runTests tool now reports test code coverage to the agent, enabling it to generate and verify tests that fully cover the code.
  • Native broker support for Microsoft Authentication on macOS now lets developers sign in through a native experience. This applies only to Intune-enrolled, M-series macOS devices and uses the Microsoft Authentication Library (MSAL).


AWS DNS error hits DynamoDB, causing problems for multiple services and customers 20 Oct 2025, 11:04 am

Monday got off to a bad start for Amazon Web Services users served by the company’s US-EAST-1 region, when a DNS problem rendered the DynamoDB API unreliable, with consequences for many AWS services and customers.

Although the root cause of the incident apparently affected a single API in just one of many AWS cloud regions, that API provides access to a key database service on which many other services, Amazon’s own and those of its customers, are built, both in that region and elsewhere.

AI search company Perplexity was one of those affected by the incident, reporting that it was “experiencing an outage related to an AWS operational issue.” And although online design tool Canva didn’t name AWS as the source of its problems, it reported a major issue with its underlying cloud provider that resulted in increased error rates for its users during the same time window.

Real-time monitoring service Downdetector noted that outages at Venmo, Roku, Lyft, Zoom, and the McDonald’s app were “possibly related to issues at Amazon Web Service.”

Increased error rates

AWS itself first reported the incident on its service health status page at 12:11 a.m. Pacific time, saying, “We are investigating increased error rates and latencies for multiple AWS services in the US-EAST-1 Region.”

A little over an hour later, it had narrowed the problem down to the DynamoDB endpoint, which it said was also affecting other services. Half an hour after that, the company reported: “Based on our investigation, the issue appears to be related to DNS resolution of the DynamoDB API endpoint in US-EAST-1. We are working on multiple parallel paths to accelerate recovery.”
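
For operators trying to confirm that diagnosis from their own machines, a simple resolution check against the regional endpoint would have shown the failure. The sketch below is a minimal, illustrative example; dynamodb.us-east-1.amazonaws.com is the standard public endpoint for the region.

    # Minimal sketch: check whether the US-EAST-1 DynamoDB endpoint resolves.
    import socket

    ENDPOINT = "dynamodb.us-east-1.amazonaws.com"

    try:
        addresses = {info[4][0] for info in socket.getaddrinfo(ENDPOINT, 443)}
        print(f"{ENDPOINT} resolves to: {sorted(addresses)}")
    except socket.gaierror as exc:
        # During the incident, failures at this step were the symptom customers saw.
        print(f"DNS resolution failed for {ENDPOINT}: {exc}")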

By this time, it was clear that the problems were not confined to users or services on the US East Coast.

“Global services or features that rely on US-EAST-1 endpoints such as IAM updates and DynamoDB Global tables may also be experiencing issues,” it said.

By 2:27 a.m. Pacific time, a little more than two hours after it began investigating the incident, the company reported that it had applied initial mitigations and recommended customers retry failed requests, warning that there may be additional latency as some services had a backlog to work through.
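
That retry guidance can be built into clients rather than handled ad hoc. The sketch below shows one way to do it with boto3’s built-in retry configuration; the table name, key, and retry budget are illustrative assumptions, not recommendations for any particular workload.

    # Minimal sketch: a DynamoDB client that retries transient failures automatically.
    import boto3
    from botocore.config import Config

    retry_config = Config(
        region_name="us-east-1",
        retries={
            "max_attempts": 10,   # total attempts, including the initial call
            "mode": "adaptive",   # exponential backoff plus client-side rate limiting
        },
    )

    dynamodb = boto3.client("dynamodb", config=retry_config)

    # Reads that hit transient errors are retried before an exception
    # ever reaches application code.
    response = dynamodb.get_item(
        TableName="orders",                      # hypothetical table
        Key={"order_id": {"S": "example-123"}},  # hypothetical key
    )
    print(response.get("Item"))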

Three hours after it began its investigation, the company reported that global services and features reliant on US-EAST-1 had recovered and promised further updates when it had more information to share.

Cloud dependencies

While this outage was quickly fixed, it shows that even in the cloud there are single points of failure that can have worldwide consequences.

A few months ago, it was Microsoft with egg on its face, when a problem in Azure’s US East region rippled out to affect other organizations. Before that, a series of outages at IBM Cloud had customers wondering whether they had made the right design choices; the third, shorter outage in that series affected 54 IBM Cloud services.

This article was first published on Network World.

