Using hsfs on my local machine, I created a feature group and saved a dataframe with the following code:
```python
fg_target = fs.create_feature_group(
    name="target",
    version=1,
    description="customerID, Churn",
    online_enabled=False,
    time_travel_format=None,
    primary_key=["customerID"],
    statistics_config=None,
)
fg_target.save(data_prep.target_table)
```
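For completeness, `fs` comes from the usual hsfs connection setup (a minimal sketch; the hostname, project name, and API key below are placeholders, not our actual values):

```python
import hsfs

# Connect to the Hopsworks feature store from the local machine.
# host, project, and api_key_value are placeholders for this example.
connection = hsfs.connection(
    host="my-instance.cloud.hopsworks.ai",  # placeholder hostname
    project="my_project",                   # placeholder project name
    api_key_value="<API_KEY>",              # placeholder API key
)
fs = connection.get_feature_store()
```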
The PySpark job is created successfully; however, it fails after about 30 seconds.
Similarly, a PySpark Jupyter notebook crashes right after executing the `import hsfs` command.
What could be the source of this problem? We are using Hopsworks version 2.2 with hsfs library version 2.2.21.