Single-VM installation error on Ubuntu 18.04

I’m trying to install the feature store on a single VM and I see a failure during the installation of ChefDK.

  1. | install chefdk | FAILED | retry skip log| 31227

Errors observed in the logs:

Command did not complete: set -eo pipefail; mkdir -p /home/gaia/.karamel/install ; cd /home/gaia/.karamel/install; echo $$ > pid; echo '#!/bin/bash

Can you help me out here? Am I missing something?

Also, if I want to try only the feature store, is there a procedure I can follow to skip the components that are not required?


The issue is resolved after I enabled passwordless sudo for the user running the script. I still need help with my second question.
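For reference, I granted passwordless sudo with an entry along these lines (gaia is the user from the log path above; edit via visudo rather than directly):

```
gaia ALL=(ALL) NOPASSWD:ALL
```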

If I want to try only the feature store, is there a procedure I can follow to skip the components that are not required during installation?


Glad to hear that you were able to resolve your first issue. Installing only the feature store is currently not possible, as it relies on the other components. If you are looking for a quicker way to try out the feature store, you could give our hosted offering a try, which includes a 30-day demo.

Thanks for the quick response.

Can you please help me with the port for the administration GUI?

I’m able to access the Karamel GUI on port 9090.

Please note that I have installed it on a single-node virtual machine.

Hopsworks should be available on ports 80 and 443 on the VM that you installed it on. The admin interface can be found like this:

Thanks.
Is accessing the feature store programmatically only possible in the Enterprise version?
I followed the Python integration guide, and my requests fail with SSL: CERTIFICATE_VERIFY_FAILED errors in my local setup.

I also tried the 30-day demo; I think it requires an AWS account to be configured in order to expose the feature store for outside integration.

I’m just trying to understand whether the feature store can be integrated with the tools we already have in place.

As you deployed the cluster yourself, your SSL certificates are self-signed rather than issued by a trusted CA. Try setting hostname_verification=False in the connect call.
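As a sketch, the connection settings would look something like the following (host, project, and API key values are placeholders you need to replace; names follow the hsfs Python client):

```python
# Hypothetical connection settings for a self-managed Hopsworks cluster.
connection_settings = {
    "host": "my-hopsworks-vm",        # placeholder: your VM's hostname or IP
    "port": 443,
    "project": "demo",                # placeholder: your project name
    "api_key_value": "<your api key>",
    # Self-signed certificates fail hostname verification; disabling it is
    # acceptable for a local test setup, but not for production.
    "hostname_verification": False,
}

# With the hsfs package installed, the connection would then be opened as:
# import hsfs
# conn = hsfs.connection(**connection_settings)
# fs = conn.get_feature_store()
```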

The 30-day demo does not require an AWS account. It’s an instance hosted entirely by us, and the relevant ports are open to the Internet.

Is it possible to change the default port 443 of the admin interface without reinstalling all of the components?

I can see the port configuration in the cluster-defns/ folder, but it gets overridden every time we run the installation.

There is unfortunately no straightforward solution for this, as many of the internal services would need to be reconfigured. If keeping the service listening internally on port 443 is acceptable, I’d recommend trying port forwarding with iptables to expose it on a different external port.
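As a sketch (assuming you want the interface reachable externally on, say, port 8443 while Hopsworks itself keeps listening on 443 — adjust ports to your setup), a NAT redirect rule would look like this:

```
# redirect incoming TCP traffic on 8443 to the local service on 443
sudo iptables -t nat -A PREROUTING -p tcp --dport 8443 -j REDIRECT --to-port 443
```

Note that PREROUTING rules only apply to traffic arriving from outside the host, not to connections from localhost.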

Okay, thanks.

The demo trial that I’m using is running into issues; can I get some help in that regard?

When I try to create a project, it throws the error “Could not create Hive DB for project”.

And when I try to launch JupyterLab from a project that has already been created, it throws the error “A generic error occurred.”

Thanks for reporting. We identified an issue with the cluster and fixed it. Let us know if you encounter any more issues.

Is there a way to allocate resources per project, i.e. to make sure that jobs from one project do not take more than their allocated resources (compute and storage)?

You can restrict how much storage and compute a project can use. You can find instructions on how to allocate quota for a project here: