Thanks all!

I wasn't familiar with using curl at the command line at all, but I did try
a basic curl yesterday based on this thread, the admin console attribute
syntax, and the tutorial in the Solr documentation (
https://lucene.apache.org/solr/guide/8_4/solr-tutorial.html), and was able
to produce the file. I basically did what Steve Ge suggested; the command
looks something like this, for anyone else who needs it in the future:

curl "http://servername.com:8983/solr/collection1/select?indent=on&q=*:*&rows=5000&wt=json" > collection1_index.json

I just set rows to the total number of documents in our index, which I got
from the admin console.
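For anyone with a larger index, Emir's cursor suggestion below can be
sketched roughly like this. This is a minimal sketch, not a tested script:
it assumes the uniqueKey field is `id` (adjust the sort if yours differs)
and uses the same placeholder host and collection name as the curl command
above. The cursorMark feature exists since Solr 4.7, so 5.3.1 has it.

```python
# Sketch: export all documents via cursorMark pagination instead of one
# huge rows=N request. Assumes uniqueKey is "id"; host/collection are
# placeholders.
import json
import urllib.parse
import urllib.request


def build_select_url(base, cursor_mark, page_size=500):
    """Build a /select URL for one page of a cursor walk.

    cursorMark requires a stable sort that includes the uniqueKey,
    hence "id asc" here.
    """
    params = {
        "q": "*:*",
        "sort": "id asc",
        "rows": str(page_size),
        "wt": "json",
        "cursorMark": cursor_mark,
    }
    return base + "/select?" + urllib.parse.urlencode(params)


def export_all(base, out_path, page_size=500):
    """Walk the cursor until it stops advancing, writing one JSON doc per line."""
    cursor = "*"  # "*" means "start from the beginning"
    with open(out_path, "w") as out:
        while True:
            with urllib.request.urlopen(build_select_url(base, cursor, page_size)) as resp:
                data = json.load(resp)
            for doc in data["response"]["docs"]:
                out.write(json.dumps(doc) + "\n")
            next_cursor = data["nextCursorMark"]
            if next_cursor == cursor:  # same mark twice means we're done
                break
            cursor = next_cursor


# Example (placeholder host/collection):
# export_all("http://servername.com:8983/solr/collection1", "collection1_index.jsonl")
```

Keeping the page size modest (a few hundred rows) avoids the huge single
responses that seem to be what freezes the admin console.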

Amanda

------
Dr. Amanda Shuman
Researcher and Lecturer, Institute of Chinese Studies, University of
Freiburg
Coordinator for the MA program in Modern China Studies
Database Administrator, The Maoist Legacy <https://maoistlegacy.de/>
PhD, University of California, Santa Cruz
http://www.amandashuman.net/
http://www.prchistoryresources.org/
Office: +49 (0) 761 203 96748



On Wed, Jan 29, 2020 at 4:21 PM Emir Arnautović <
emir.arnauto...@sematext.com> wrote:

> Hi Amanda,
> I assume that you have all the fields stored, so you will be able to export
> full documents.
>
> Several thousand records should not be too much for regular start+rows
> pagination, but the proper way of doing that would be to use cursors. Adjust
> the page size to avoid creating huge responses, and you can use curl or some
> similar tool to avoid using the admin console. I did a quick search and
> there are several blog posts with scripts that do what you need.
>
> HTH,
> Emir
>
> --
> Monitoring - Log Management - Alerting - Anomaly Detection
> Solr & Elasticsearch Consulting Support Training - http://sematext.com/
>
>
>
> > On 29 Jan 2020, at 15:43, Amanda Shuman <amanda.shu...@gmail.com> wrote:
> >
> > Dear all:
> >
> > I've been asked to produce a JSON file of our index so it can be combined
> > and indexed with other records. (We run Solr 5.3.1 on this project; we're
> > not going to upgrade, in part because funding has ended.) The index has
> > several thousand rows, but nothing too drastic. Unfortunately, this is too
> > much to handle for a simple query dump from the admin console. I tried to
> > follow instructions related to running /export directly, but I guess the
> > export handler isn't installed. I tried to divide the query into rows, but
> > after a certain amount it freezes, and it also freezes when I try to limit
> > rows (e.g., rows 501-551 freezes the console). Is there any other way to
> > export the index short of having to install the export handler, considering
> > we're not working on this project anymore?
> >
> > Thanks,
> > Amanda
> >
>
>