On-premise installation with ROCm-enabled GPUs?

Dear ‘hopsworks.ai’ community,

How do I install and configure Hopsworks (2.4?) so that it can utilize ROCm-enabled (AMD) GPUs? Logical Clocks' demo presentation(s) claim it's supported, and I can see some AMD/ROCm-related handling in the Karamel scripts, but how does it all hang together? I also noticed that Hopsworks' "officially endorsed" Ubuntu 18.04 ships, by default, with a kernel that is too old for ROCm to work out of the box.
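For context on the kernel point above, here is a minimal sanity-check sketch I ran before posting (the exact minimum kernel depends on the ROCm release you target, so treat the version numbers as assumptions):

```shell
# Check the running kernel: stock Ubuntu 18.04 GA ships 4.15, while newer
# ROCm releases generally expect an HWE or newer kernel.
uname -r

# Confirm which distribution release this actually is.
lsb_release -ds 2>/dev/null

# If ROCm were already installed, rocminfo would list the GPU agents;
# on a stock system it is simply absent.
command -v rocminfo >/dev/null && rocminfo | grep -i 'Name:' || echo "rocminfo not found"
```

On the stock image, `rocminfo` is not present, which is consistent with ROCm not being installed by default.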

At the very least, could you share any specific pointers for me to continue digging?


— Konstantin.

Hi @kostja,

We no longer support ROCm (AMD) GPUs; the existing support covers NVIDIA GPUs only: Custom Installation — Documentation 2.4 documentation

Best regards,

Dear Robin,

… got that, thanks for the message! I am surprised by the 'anymore' part, though: the 'ROCm and Distributed Deep Learning …' YouTube video is from May 9, 2019.

I wonder whether there was (or is) any particular reason for that decision, and whether there are any plans to regain that support?


— Konstantin.