Yes, creating a new version does not automatically migrate the data; however, you can still read the data from the previous version. That’s by design. The rationale is that a new version implies a breaking change in the feature group (e.g. you dropped a feature or completely changed the meaning of a feature), so we can’t automatically migrate the feature group data between versions. Users have to write a Spark job that reads from the previous version, applies the transformations, and writes the dataframe to the new version.
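For illustration, such a migration job could look like the sketch below. The Spark/HSFS calls are shown as comments only and follow the client API's usual shape (`get_feature_group`, `read`, `insert`), but verify them against the version you have installed; the column names (`clicks`, `click_count`, `legacy_score`) and the feature group name are hypothetical. The schema mapping itself is plain Python so the breaking change is explicit:

```python
# Hypothetical migration sketch: read feature group v1, transform, write v2.

def migrate_row(row):
    """Map a v1 record to the v2 schema: 'clicks' was renamed to
    'click_count' and the obsolete 'legacy_score' column is dropped
    (hypothetical columns, for illustration only)."""
    new = dict(row)
    new["click_count"] = new.pop("clicks")
    new.pop("legacy_score", None)
    return new

# In an actual Spark job (not executed here), the same mapping would be
# expressed on DataFrames, e.g.:
#
#   fg_v1 = fs.get_feature_group("transactions", version=1)
#   df_v1 = fg_v1.read()  # Spark DataFrame with the old schema
#   df_v2 = (df_v1
#            .withColumnRenamed("clicks", "click_count")
#            .drop("legacy_score"))
#   fg_v2 = fs.get_feature_group("transactions", version=2)
#   fg_v2.insert(df_v2)   # populates the new, initially empty table

# Plain-Python check of the schema mapping:
old = {"user_id": 7, "clicks": 3, "legacy_score": 0.5}
print(migrate_row(old))  # {'user_id': 7, 'click_count': 3}
```

The point of isolating the mapping in one function (or one DataFrame expression) is that the breaking change is documented in exactly one place, which makes the migration easy to review.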
From an implementation point of view, a new version means a new Hive table which will be initially empty.
Unfortunately, as I mentioned above, we currently treat “adding a new column” as a breaking change as well. That’s going to be fixed with the new APIs.