This does look like a problem in the code itself: opening and committing a transaction
for every single insert is bound to be slower than batching many inserts
into one transaction.
That is simply the nature of things. To handle really high-volume loads, many databases support bulk insert operations in one form or another.
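For instance, at the plain JDBC level most drivers let you queue rows with addBatch() and send them in one round trip with executeBatch(). A minimal sketch of that idea (the connection URL, table and column names are only illustrative, not taken from your setup):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class JdbcBatchInsert {
        public static void main(String[] args) throws SQLException {
            // URL, credentials, table and column names are placeholders
            try (Connection con = DriverManager.getConnection(
                    "jdbc:yourdb://localhost/test", "user", "pass")) {
                con.setAutoCommit(false);                  // one transaction for the whole batch
                String sql = "INSERT INTO account (name) VALUES (?)";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    for (int i = 0; i < 1000; i++) {
                        ps.setString(1, "user" + i);
                        ps.addBatch();                     // queue the row instead of executing it
                    }
                    ps.executeBatch();                     // send all queued rows at once
                }
                con.commit();
            }
        }
    }

Most JPA providers can be configured to do similar statement batching under the hood; check your provider's documentation for the relevant settings.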

Also, if you cannot live without one transaction per record,
it helps to split the data and push it in from multiple threads.
That helps to some extent.
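A rough sketch of that idea, reusing the DAO types from your snippet (DAOFactory, AccountDao, TransactionProxy) and assuming each thread can safely create its own DAO instance; the thread count and error handling are only placeholders:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ParallelInsert {
        public static void main(String[] args) throws Exception {
            int totalRecords = 1000;
            int threads = 4;                       // tune to your connection pool size
            ExecutorService pool = Executors.newFixedThreadPool(threads);

            for (int t = 0; t < threads; t++) {
                final int offset = t * (totalRecords / threads);
                final int count = totalRecords / threads;
                pool.submit(() -> {
                    try {
                        // each thread gets its own DAO; EntityManagers are usually not thread-safe
                        DAOFactory daoF = new DAOFactory();
                        AccountDao accountDao = daoF.getAccountDao();
                        for (int i = offset; i < offset + count; i++) {
                            TransactionProxy tran = accountDao.beginTransaction();
                            Account newuser = new Account();
                            newuser.setName("user" + i);
                            accountDao.insert(newuser);
                            accountDao.commitTransaction(tran);
                        }
                    } catch (Exception e) {
                        e.printStackTrace();       // placeholder error handling
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.MINUTES);
        }
    }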

But the quickest fix for now is to insert multiple records per transaction!
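A minimal rework of your insertJPA() along those lines, using the same DAO API from your snippet and committing every 100 rows (the batch size is just a guess, tune it for your setup):

    private static void insertJPABatched() throws DAOException, SQLException {
        DAOFactory daoF = new DAOFactory();
        AccountDao accountDao = daoF.getAccountDao();

        final int batchSize = 100;                   // commit every 100 inserts
        TransactionProxy tran = accountDao.beginTransaction();
        for (int i = 0; i < 1000; i++) {
            Account newuser = new Account();
            newuser.setName("user" + i);
            accountDao.insert(newuser);

            if ((i + 1) % batchSize == 0) {          // commit this batch, start the next one
                accountDao.commitTransaction(tran);
                tran = accountDao.beginTransaction();
            }
        }
        accountDao.commitTransaction(tran);          // commit whatever is left over
    }

If your DAO exposes the underlying EntityManager, clearing it after each commit also keeps the persistence context from growing without bound.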


Werner


Pinaki Poddar wrote:
Are you beginning/committing 1000 transactions to insert 1000 instances?


Chengdong Lu wrote:

My test is to insert 1000 records into the table. The test code is like
this:
 private static void insertJPA() throws DAOException, SQLException {
  DAOFactory daoF = new DAOFactory();
  AccountDao accountDao = daoF.getAccountDao();
  System.out.println(new java.util.Date().toString());

  for (int i = 0; i < 1000; i++) {
   // a separate transaction is begun and committed for every single insert
   TransactionProxy tran = accountDao.beginTransaction();
   Account newuser = new Account();
   newuser.setName("user" + i);
   //newuser.setFullname("User No. " + i);
   //newuser.setDescript("Common user");
   accountDao.insert(newuser);
   accountDao.commitTransaction(tran);
  }
 }



