Hi Team,
We are trying to push a pandas DataFrame from our local machine to the Feature Store on hopsworks.ai. Does this require a Spark cluster in the local environment, or is the hsfs[hive] library in Python enough?
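For context, this is roughly what we are running from the local notebook (the host, project, API key, feature group name, and columns below are placeholders, and the exact connection parameters are our reading of the hsfs docs, so they may not match our real setup exactly):

```python
import hsfs
import pandas as pd

# Connect from the local machine to the managed hopsworks.ai instance,
# using the hive engine so that no local Spark cluster is needed.
connection = hsfs.connection(
    host="xxxxx.cloud.hopsworks.ai",  # placeholder host
    project="demo_project",           # placeholder project name
    api_key_value="API_KEY",          # placeholder API key
    engine="hive",
)
fs = connection.get_feature_store()

# Small sample frame standing in for our real data.
df = pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 20.0, 30.0]})

fg = fs.create_feature_group(
    name="transactions_fg",  # placeholder feature group name
    version=1,
    primary_key=["id"],
    description="test feature group",
)
fg.save(df)  # this is the call that fails with the AttributeError below
```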
Also, we are able to retrieve data from the feature group, but when trying to load data into the feature store we end up with the following error: "'Engine' object has no attribute 'convert_to_default_dataframe'". If we change the library version, we get a different error.
We are not sure what is going wrong here. However, we are able to store data from a CSV into the feature store when running inside hopsworks.ai, using the same steps as in the local Jupyter notebook.
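For reference, the read side works fine from the same local environment, roughly like this (again with a placeholder feature group name):

```python
# Reading an existing feature group back as a pandas DataFrame succeeds.
fg = fs.get_feature_group("transactions_fg", version=1)
df_read = fg.read()  # returns a pandas DataFrame via the hive engine
```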
hopsworks.ai version 2.0.0
hsfs[hive] library version 2.0.12
Thanks,
Guru