Can the offline feature store use any Hadoop compatible storage?

I saw a video by Jim Dowling in which he states that the managed cloud version of Hopsworks writes Hudi tables to S3 (or, I presume, a similar Hudi-compatible cloud object store), while the community version writes to local disk. The video is here: Hopsworks Live Coding: Installing Hopsworks Open Source - YouTube

I’m not sure whether that’s just the default or whether it’s mandatory for some reason. So my question is: can a Hopsworks instance be configured to use any Hadoop-compatible storage as its offline store? And does the answer change depending on whether it’s the community, managed, or Kubernetes version?

For example, if I run a big on-prem HDFS cluster for my data already, can I point an instance of Hopsworks to use a subdirectory of that filesystem as its offline store?
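To make that concrete, here is roughly the setup I’m imagining: Hopsworks writing its offline feature groups (Hudi tables) under a prefix on my existing cluster, so that they can also be read back with plain Spark. This is just a sketch of the intent, not something I’ve tried; the namenode address, path, and feature group name are all made up.

```python
# Hypothetical sketch of what I have in mind: the offline store living under a
# subdirectory of my existing on-prem HDFS cluster, with the resulting Hudi
# tables readable via plain Spark. All paths/names below are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("read-offline-feature-group")
    # Hudi's Spark integration generally expects the Kryo serializer
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

# e.g. Hopsworks would write its offline feature groups under this prefix
offline_store_root = "hdfs://my-onprem-namenode:8020/data/hopsworks_offline_fs"

# Read one feature group's Hudi table directly with Spark
df = spark.read.format("hudi").load(f"{offline_store_root}/my_feature_group_1")
df.show(5)
```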

Also, if it’s possible to do this, is there anyone out there who’s configured it this way in production? I’d love to hear some stories.

I am also curious about the following:
What are the requirements for using a Cloudera Hadoop cluster as a source of feature data for the Hopsworks feature store? Any guidelines or examples would be highly appreciated. Also, where can I find the Hadoop versions supported by Hopsworks? A rough sketch of the workflow I have in mind is below.
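The pattern I’m hoping is supported looks something like this: read feature data from the existing Cloudera/CDP cluster with Spark, then write it into a Hopsworks feature group with the hsfs client. This is only an illustration of the question, not a working setup; hostnames, project, table names, and keys are placeholders, and the exact hsfs calls may differ between client versions.

```python
# Rough sketch: Cloudera/CDP Hive table as the source, Hopsworks feature
# store as the sink. All identifiers below are placeholders.
import hsfs
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("cdp-to-hopsworks")
    .enableHiveSupport()
    .getOrCreate()
)

# Source: a Hive table (or raw HDFS files) on the Cloudera cluster
source_df = spark.sql(
    "SELECT customer_id, total_spend, last_purchase_ts FROM dw.customer_metrics"
)

# Sink: the Hopsworks feature store, reached via the hsfs client
connection = hsfs.connection(
    host="hopsworks.mycompany.internal",  # placeholder
    project="fraud_detection",            # placeholder
    api_key_value="<api key>",
)
fs = connection.get_feature_store()

fg = fs.create_feature_group(
    name="customer_metrics",
    version=1,
    primary_key=["customer_id"],
    online_enabled=False,  # offline (Hudi) storage only
)
fg.insert(source_df)
```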

To be more specific, can we install Hopsworks inside an existing Cloudera Hadoop environment (Spark 2.4, Cloudera CDP 7.1.3)?

Thanks