3_model_training; predict step crashes

---------------------------------------------------------------------------

RestAPIError                              Traceback (most recent call last)

<ipython-input-30-b1e9947f2982> in <module>
      3 }
      4 
----> 5 deployment.predict(data)

5 frames

/usr/local/lib/python3.7/dist-packages/hsml/client/base.py in _send_request(self, method, path_params, query_params, headers, data, stream, files)
    106 
    107         if response.status_code // 100 != 2:
--> 108             raise exceptions.RestAPIError(url, response)
    109 
    110         if stream:

RestAPIError: Metadata operation error: (url: http://afbae69b206bd4749b1c4ca8fc3d6dd4-96222519.us-east-2.elb.amazonaws.com/v1/models/fraudonlinemodeldeployment:predict). Server response: 
HTTP code: 500, HTTP reason: Internal Server Error

 Check the model server logs by using `.get_logs()`
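For context: the traceback shows the client raising `RestAPIError` whenever the server replies with anything outside the 2xx range, so the 500 above is simply passed through from the model server. A minimal, self-contained illustration of that check (the helper name is hypothetical, the logic mirrors the `status_code // 100 != 2` line in the traceback):

```python
# Sketch of the status check seen in hsml/client/base.py above:
# any status outside the 2xx range (such as the server's 500
# Internal Server Error) is treated as a failed request.
def is_error(status_code: int) -> bool:
    return status_code // 100 != 2

print(is_error(500))  # the 500 from the model server is an error
print(is_error(200))  # a normal 2xx response is not
```

So the real problem is on the model server side, which is why the error message points at `.get_logs()`.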


Entry is expected to be single value per primary key. If you have already initialised prepared statements for single vector and now want to retrieve batch vector please reinitialise prepared statements with training_dataset.init_prepared_statement() or feature_view.init_serving()

(Also, this forum tells me: “Sorry, new users can only put one embedded item per post”, so I can’t embed more images.)

Hi @haf,

Thanks for reporting this. Indeed the example is wrong; it should use `instances` instead of `inputs`. We are fixing the example. In the meantime, you can run a prediction with this snippet:

data = {
    "instances": model.input_example
}

deployment.predict(data)
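(For context: the `instances` key comes from the KServe v1 inference protocol, which the deployment endpoint above serves. A minimal sketch of building such a request body by hand, with a hypothetical helper name and placeholder input values:)

```python
import json

# Hypothetical helper: the KServe v1 protocol expects the JSON body
# to carry the input rows under an "instances" key (not "inputs").
# Each element of `rows` is one input vector for the model.
def build_predict_payload(rows):
    return {"instances": rows}

# Placeholder values; in the tutorial, model.input_example plays this role.
data = build_predict_payload([[0.1, 0.2, 0.3]])
print(json.dumps(data))
```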

Regards,


Fabio

Hi Fabio,

OK, that gets me the same error, but different logs:


Instance name: fraudonlinemodeldeployment-predictor-default-00001-deploymxds78
[I 221024 10:30:22 kserve-component-server:95] Initializing predictor for deployment: fraudonlinemodeldeployment
DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
[I 221024 10:30:30 kfserver:150] Registering model: fraudonlinemodeldeployment
[I 221024 10:30:30 kfserver:120] Setting asyncio max_workers as 12
[I 221024 10:30:30 kfserver:127] Listening on port 8080
[I 221024 10:30:30 kfserver:129] Will fork 1 workers
[E 221024 10:30:30 web:2246] 500 POST /v1/models/fraudonlinemodeldeployment:predict (127.0.0.1) 22.59ms

What’s happening here? Could you look into it?

Screenshot:

Hi @haf ,

Can you make sure you have data in the online feature groups? In the UI, check the data preview section in the feature group overview. I’m asking because during the weekend we had some overload issues on app.hopsworks.ai, and it could be that your data was not ingested correctly.

If that’s the case, you can run the first notebook again to write data to the online feature groups before making the prediction.


Fabio

Hi

No, that did not solve it. Perhaps you could look into it on your side? The imports work fine.

Kind regards
Henrik