Error: tablespace full

Hi,
while using the Hopsworks 1.0 platform deployed on 5 nodes with ten active Spark jobs, we encounter a “tablespace full” error in the HopsFS log file, which brings down the Hadoop services.

The error detail is:

    mysqlCode 135, state 2, classification 6, message Out of extent, tablespace full.

The MySQL Cluster is composed of 3 NDB nodes running on the 3 DataNodes.

Have you ever encountered this error? Can you give me some suggestions on how to solve the problem?

Thanks,
Antony

Hi Antony,

HopsFS stores really small files directly in the database, in tables backed by disk-data tablespaces, for increased performance.
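For context, such a disk-data tablespace is created in NDB roughly like this (a sketch only; your installation already has one, and the logfile group and file names here are illustrative):

    -- Minimal sketch of an NDB disk-data tablespace definition;
    -- lg_1 is an assumed logfile group name and may differ on your cluster.
    CREATE TABLESPACE ts_1
        ADD DATAFILE 'ts_1_data_file_0.dat'
        USE LOGFILE GROUP lg_1
        INITIAL_SIZE = 1G
        ENGINE = NDBCLUSTER;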

First, you will need to stop the NameNode to avoid possible corruption: sudo systemctl stop namenode. The next step is to increase the NDB tablespace size. We need to identify the name of the tablespace; normally it is ts_1, but we can check by querying MySQL.

Run the MySQL client /srv/hops/mysql-cluster/ndb/scripts/mysql-client.sh and execute:

    SELECT
        FILE_NAME AS File, FILE_TYPE AS Type,
        TABLESPACE_NAME AS Tablespace, TABLE_NAME AS Name,
        LOGFILE_GROUP_NAME AS 'File group',
        FREE_EXTENTS AS Free, TOTAL_EXTENTS AS Total
    FROM INFORMATION_SCHEMA.FILES
    WHERE ENGINE = 'ndbcluster';
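If FREE_EXTENTS is at or near zero for the datafiles, the tablespace is exhausted. As a quick check, the remaining space can be computed from the same view (a sketch; EXTENT_SIZE is reported in bytes, and FREE_EXTENTS is only populated for datafiles, hence the filter):

    -- Remaining free space per NDB datafile, in megabytes.
    SELECT FILE_NAME,
           FREE_EXTENTS * EXTENT_SIZE / 1024 / 1024 AS free_mb
    FROM INFORMATION_SCHEMA.FILES
    WHERE ENGINE = 'ndbcluster'
      AND FILE_TYPE = 'DATAFILE';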

Assuming the name of the tablespace is ts_1, we add a new 1 GB datafile to this tablespace by executing:

    ALTER TABLESPACE ts_1
        ADD DATAFILE 'ts_1_data_file_1.dat'
        INITIAL_SIZE = 1G
        ENGINE = NDBCLUSTER;
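To confirm the new datafile was picked up, you can re-run the query above; the new file should appear with all of its extents free. A narrower check:

    -- Verify the new datafile is visible and has free extents.
    SELECT FILE_NAME, FREE_EXTENTS, TOTAL_EXTENTS
    FROM INFORMATION_SCHEMA.FILES
    WHERE ENGINE = 'ndbcluster'
      AND TABLESPACE_NAME = 'ts_1';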

Then you can restart the NameNode with sudo systemctl restart namenode and it should come back. It is also advisable to restart all the other services.

Kind regards

Hi Antonios,
Thank you for your reply.

This solution was correct and worked for me.

Thanks a lot,
Kind regards,
Antony