(By noticeable I mean around 750 benchmark queries per second slower.)
Since you only give the change, not the total, that is hard to put into context. E.g. if the total is 750,000 queries per second, then it is within the margin of error.
The main question is: is the method listed above the best way to improve the speed of a large table, or should the columns all remain in the same table, since splitting may cause other problems later on?
My guess is a far simpler cause. Each row is stored with its fields laid out sequentially in the database file. So the primary key and the second field will be stored in the same page, but the primary key and the 190th field are likely to be on different pages (depending on field sizes). That means twice as much I/O. I'd suggest looking into the page size pragma as well as the cache size pragma. If I am right, increasing them should (mostly) restore your performance.

Roger
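A minimal sketch of that pragma tuning, assuming an SQLite command-line session on an existing database; the specific values are illustrative assumptions, not measured for this workload:

    -- page_size only takes effect when the file is (re)built,
    -- so run VACUUM afterwards to rewrite the database with larger pages.
    PRAGMA page_size = 8192;
    VACUUM;

    -- A negative cache_size is interpreted as KiB, so this asks for
    -- roughly 64 MB of page cache for the current connection.
    PRAGMA cache_size = -64000;

Note that cache_size is per connection, so it has to be set each time the database is opened (or in a connection-setup hook), whereas the page size persists in the file once the VACUUM completes.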

