NVIDIA CEO: AI-Generated Textures and NPCs via DLSS Are the Future
According to More Than Moore, NVIDIA founder and CEO Jensen Huang recently addressed a question about DLSS at COMPUTEX 2024, saying that in the future we will see in-game textures and objects generated purely through artificial intelligence (AI), and that AI NPCs will likewise be generated entirely via DLSS.
DLSS improves the gaming performance of GeForce RTX graphics cards by offloading part of the rendering workload to the Tensor Cores, reducing the demand on the CUDA cores, freeing up resources, and raising frame rates. Huang noted that DLSS can now generate textures and objects as well, improving both object quality and overall visual fidelity.
In a sense, this represents the next iteration of DLSS. NVIDIA is reportedly developing a new texture compression technology that uses trained neural networks to significantly improve texture quality while keeping memory requirements roughly the same. Traditional texture compression methods top out at a compression ratio of about 8x, whereas NVIDIA's neural-network-based technique can reach up to 16x. Intriguingly, DLSS could also create in-game objects from scratch, a significant step beyond the current DLSS framework, though developers would still need to specify where those objects are placed in the game world and what they should render.
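To put the quoted ratios in perspective, the short Python sketch below works through the arithmetic for a single texture. It is a hypothetical illustration, not NVIDIA code; the 4K RGBA8 texture size and the assumption that the ratios apply uniformly are our own simplifications.

```python
# Back-of-the-envelope comparison of texture memory footprints at the
# compression ratios cited in the article: ~8x (traditional block compression)
# versus ~16x (neural compression). Figures are illustrative, not NVIDIA data.

def texture_footprint_mib(width: int, height: int, bytes_per_texel: int, ratio: float) -> float:
    """Return the compressed size in MiB of a single mip level."""
    uncompressed = width * height * bytes_per_texel
    return uncompressed / ratio / (1024 ** 2)

if __name__ == "__main__":
    w, h, bpt = 4096, 4096, 4  # assumed 4K RGBA8 texture: 64 MiB uncompressed
    print(f"Traditional 8x:  {texture_footprint_mib(w, h, bpt, 8):.1f} MiB")   # ~8 MiB
    print(f"Neural 16x:      {texture_footprint_mib(w, h, bpt, 16):.1f} MiB")  # ~4 MiB
    # At the same memory budget, a 16x codec could instead store roughly twice
    # the texel data, which is the quality headroom the article describes.
```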
Huang envisions DLSS generating not only in-game objects but also NPCs. Last year, NVIDIA established a new research group called GEAR (Generalist Embodied Agent Research), dedicated to developing AI agents that can operate skillfully in both virtual and physical worlds, such as intelligent robots and game NPCs.