The standard way of accessing common code and services is an API – the application programming interface.
APIs tend to become a boat-anchor in code: once you use one, you are dependent on its particular way of doing things. They are opaque portals.
Languages describe how to do something, e.g. conventional linear (control-flow) programming in C/C++, or what hardware is supposed to do in Verilog or VHDL (RTL).
While either side of an API might evolve – new hardware, bigger machines – APIs themselves tend not to. Linux is essentially an operating system rewritten to match the APIs of Unix (something from the 1970s). On the other hand a piece of Verilog-RTL written in the 1990s can be synthesized in new ways in 2020 to perform better.
APIs work best for describing static things like immutable blocks of hardware, as with driver interfaces for particular ICs. Almost everything else is best described in languages that capture higher-level intent and can be compiled efficiently onto whatever new hardware comes along. This is probably most obvious in the GPU business, where there is no standard “graphics language” for describing games: game designers work to various versions of APIs such as OpenGL, with the consequence that GPU designers then spend their time trying to support legacy APIs.
One fix for the API boat-anchor problem is to use open-source code across the API, so that compilers can optimize it away, but users rarely want to provide their graphics software in that form. RenderScript is an attempt to optimize on the hardware side by delaying compilation until the target hardware is known.
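The delayed-compilation idea can be sketched in a few lines of Python, using `compile()`/`exec()` as a stand-in for a real device-side compiler (a real system like RenderScript lowers a portable intermediate form to CPU or GPU code; the kernel here is purely illustrative):

```python
# Sketch: ship a portable *description* of the computation, and only
# compile it once the target is known, instead of shipping a binary
# bound to one hardware generation.

# The "shipped" artifact: source text, not machine code.
portable_src = """
def saturate(x):
    return min(max(x, 0), 255)
"""

# Delayed compilation on the "device": bytecode is produced at run
# time, so a different device could lower the same source differently.
namespace = {}
exec(compile(portable_src, "<shipped-kernel>", "exec"), namespace)
saturate = namespace["saturate"]

print(saturate(300))  # clamped to 255
```

The point is that the optimization opportunity moves to the hardware side of the boundary: the compiler that knows the target sees the whole computation, not an opaque call.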
So the alternative to an API is a DSL (domain-specific language). With a DSL you decompose what you are trying to do into primitive elements that the DSL compiler understands. The primitive elements have methods, but their implementation is not defined; with RTL the final target can be simulation, an FPGA, or an ASIC, and the language does not need to change.
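A minimal sketch of that separation, as a hypothetical expression DSL in Python: the design is described once as a tree of primitives whose implementation is deliberately left open, and two independent backends interpret the same tree, one by simulating it, one by emitting Verilog-flavoured text (all names here are invented for illustration):

```python
# Hypothetical tiny DSL: primitives carry intent, not implementation.
class Sig:
    """A DSL node: an operator applied to operand nodes."""
    def __init__(self, op, *args):
        self.op, self.args = op, args
    def __and__(self, other): return Sig("and", self, other)
    def __xor__(self, other): return Sig("xor", self, other)

def inp(name):
    return Sig("in", name)

# Backend 1: "simulation" target - evaluate with concrete inputs.
def simulate(node, env):
    if node.op == "in":
        return env[node.args[0]]
    a, b = (simulate(x, env) for x in node.args)
    return (a & b) if node.op == "and" else (a ^ b)

# Backend 2: "synthesis" target - emit Verilog-like expression text.
def emit(node):
    if node.op == "in":
        return node.args[0]
    a, b = (emit(x) for x in node.args)
    return f"({a} {'&' if node.op == 'and' else '^'} {b})"

# A half-adder described once, independent of any target.
a, b = inp("a"), inp("b")
sum_, carry = a ^ b, a & b

print(simulate(sum_, {"a": 1, "b": 1}))  # simulate the design
print(emit(carry))                       # or generate code for it
```

A new backend (say, an FPGA netlist generator) would be another function over the same tree; the description of the half-adder never changes, which is exactly the property RTL has and an API lacks.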
In a similar vein, CUDA is more API than language, so it does not separate well from its particular block of hardware (NVIDIA GPUs).
What brought me to the conclusion that languages are better than APIs is hardware/software co-design: to do it you need languages that range from high-level DSLs describing purpose down to low-level DSLs describing hardware, so that the boundary between software and hardware stays fluid and intent is not lost along the way. A large GPU project I worked on suffered from not having that approach, leaving engineers mostly doing assembly-level hardware design (in Verilog-RTL) to support ancient APIs. A correct-by-construction hardware-compiler approach (generating a stream-processing machine) would have been a little harder to do, but infinitely more reusable and verifiable.
OneAPI from Intel takes it to the limit (going by the name), and going for the lowest common denominator almost guarantees poor performance. That said, in the world of AI at the lowest level of hardware, if you can assemble systems from standard(-looking) neurons, maybe that works, but you then have to add a language for describing the neural networks.