Why not just have scripts that echo out the various sets of test
data you are interested in?  That way, Popen would
always be your interface and you wouldn't have to
make two cases in the consumer script.

In other words, make a program that outputs test
data just like your main data source program does.
Then the consumer only has to work one way.
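
Something along these lines, say (untested sketch; "seff" and
"fake_seff.py" are just placeholder names):

    # fake_seff.py - stand-in for the real command: just prints saved data
    import sys

    with open("test_input_01.txt") as f:
        sys.stdout.write(f.read())

and on the consumer side:

    # always read from a subprocess, whether real or fake
    import subprocess

    cmd = ["seff", "9431211"]            # production
    # cmd = ["python", "fake_seff.py"]   # testing: same interface, canned data

    with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
        output = proc.stdout.read()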







On 3/10/22 04:16, Loris Bennett wrote:
Hi,

I have a command which produces output like the
following:

   Job ID: 9431211
   Cluster: curta
   User/Group: build/staff
   State: COMPLETED (exit code 0)
   Nodes: 1
   Cores per node: 8
   CPU Utilized: 01:30:53
   CPU Efficiency: 83.63% of 01:48:40 core-walltime
   Job Wall-clock time: 00:13:35
   Memory Utilized: 6.45 GB
   Memory Efficiency: 80.68% of 8.00 GB

I want to parse this and am using subprocess.Popen, accessing the
contents via Popen.stdout.  However, for testing purposes I want to save
various possible outputs of the command as text files and use those as
inputs.

What format should I use to pass data to the actual parsing function?

I could, in both production and test, convert the entire input to a
string and pass that string to the parsing method.
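
For instance, the parsing function might then look something like this
(just a sketch, with made-up names):

    def parse_output(text):
        """Parse "Key: value" lines into a dict."""
        info = {}
        for line in text.splitlines():
            key, _, value = line.strip().partition(":")
            if key:
                info[key] = value.strip()
        return info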

Alternatively, I could use something like

    test_input_01 = subprocess.Popen(
        ["cat", "test_input_01.txt"],
        stdout=subprocess.PIPE,
    )
for the test input and then pass a Popen object to the parsing function.
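
The parsing function would then have to read and decode the pipe itself,
e.g. (reusing the parse_output sketch above):

    def parse_popen(popen_obj):
        return parse_output(popen_obj.stdout.read().decode())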

Any comments on these alternatives, or suggestions for doing something
completely different?

Cheers,

Loris
