NDB Cluster and Max_rows

Just before Christmas, I was working with a client that needed to insert more than a billion rows into an NDB table. The cluster was big,
14 nodes, and after a few hundred million rows, we got the following error:

2008-12-09 18:28:52 [ndbd] INFO     -- dbacc/DbaccMain.cpp
2008-12-09 18:28:52 [ndbd] INFO     -- DBACC (Line: 5274) 0x0000000e
2008-12-09 18:28:52 [ndbd] INFO     -- Error handler shutting down system
2008-12-09 18:28:53 [ndbd] INFO     -- Error handler shutdown completed - exiting
2008-12-09 18:28:56 [ndbd] ALERT    -- Node 10: Forced node shutdown completed. Caused by error 2304: 'Array index out of range(Internal error, programming error or missing error message,

which looks like a bug in the NDB kernel.  In fact, it is not really a bug.  This error is the equivalent of a “table is full” error
with MyISAM when the row pointer size is too small.  With MyISAM, you can use the “max_rows” option in the CREATE TABLE statement
to hint to MySQL what pointer size to use.  The same concept applies to NDB, so we recreated the table with a “max_rows” option like:

  `TablePK` bigint(20) NOT NULL DEFAULT '0',
  `TableData` varbinary(440) DEFAULT NULL,
) ENGINE=ndbcluster DEFAULT CHARSET=latin1 max_rows=3000000000;

and it solved the issue.
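If the table already exists and is populated, the same hint can be applied after the fact with ALTER TABLE. A minimal sketch, assuming a hypothetical table named `t1` (keep in mind this is a copying ALTER on the NDB versions of that era, so expect it to take a while on a large table):

```sql
-- Hint NDB to size its internal hash structures for ~3 billion rows.
-- `t1` is a placeholder name; pick a max_rows somewhat above your
-- expected final row count.
ALTER TABLE t1 MAX_ROWS=3000000000;
```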

About Yves Trudeau

I work as a senior consultant in the MySQL professional services team at Sun. My main areas of expertise are DRBD/Heartbeat and NDB Cluster. I am also involved in the WaffleGrid project.
This entry was posted in mysql, NDB Cluster, yves.

3 Responses to NDB Cluster and Max_rows

  1. Jonas says:

    ehh…i actually think it’s 2 bugs
    1) it crashes the ndbd instead of reporting
    that it’s full
    2) the parameter is needed in the first place

    but it definitely has a good work-around

  2. does it do the same thing if you set a max_rows of a lot smaller?

    we should also probably fix that bug….

  3. Yves Trudeau says:

    I have not tried but I believe so.