> From: artemerv...@gmail.com
> Date: Thu, 13 Oct 2016 00:49:50 -0400
> Subject: Re: HL7 messages to Kafka consumer
> To: users@kafka.apache.org
>
> Nifi HL7 processor is built using HAPI API, which supports z-segments
> http://hl7api.sourceforge.net/xref/ca/uhn/hl7v2/examples/CustomModelClasses.html
MG>let's go back to the spec so we are on the same page:
Z segments contain clinical or patient data that the HL7 Standard may not have
defined in other areas. Essentially, it is the “catch all” for data that does
not fit into the HL7 Standard message definitions. Z segments can be inserted
in ANY message at any time, and Z segments can carry ANY data you want. In HL7
messaging, all Z segments within it start with the letter “Z”.
http://healthstandards.com/blog/2006/10/05/what-are-z-segments/
so in your example above the segment has a predefined class which must be
known a priori and already coded. That class contains 2 datatypes:
ca.uhn.hl7v2.model.v25.datatype.NM;  // Numeric
ca.uhn.hl7v2.model.v25.datatype.ST;  // String
yet the spec says Z segments can carry ANY data you want.
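MG>for context, the linked HAPI example amounts to hand-writing a segment class
ahead of time, roughly like the untested sketch below (the ZBD segment and its
field names are mine, not HAPI's):

import ca.uhn.hl7v2.HL7Exception;
import ca.uhn.hl7v2.model.AbstractSegment;
import ca.uhn.hl7v2.model.Group;
import ca.uhn.hl7v2.model.Message;
import ca.uhn.hl7v2.model.v25.datatype.NM;
import ca.uhn.hl7v2.model.v25.datatype.ST;
import ca.uhn.hl7v2.parser.ModelClassFactory;

// hypothetical custom Z segment: every field has to be declared up front
public class ZBD extends AbstractSegment {

    public ZBD(Group parent, ModelClassFactory factory) {
        super(parent, factory);
        Message msg = getMessage();
        try {
            // ZBD-1: a string field, ZBD-2: a numeric field
            add(ST.class, true, 1, 100, new Object[]{ msg });
            add(NM.class, false, 1, 10, new Object[]{ msg });
        } catch (HL7Exception e) {
            throw new RuntimeException("Unable to set up ZBD segment", e);
        }
    }

    // typed accessor for ZBD-1
    public ST getCustomStringField() throws HL7Exception {
        return (ST) getField(1, 0);
    }
}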
so what happens when a health-care facility wants a Z segment which contains
backup-doctor contact info? right now http://hl7api.sourceforge.net/ contains no
"backup-doctor-contact-info" segment:
http://hl7api.sourceforge.net/v25/apidocs/ca/uhn/hl7v2/model/v25/datatype/package-frame.html
if I do add a new Backup-Doctor-Contact-Info segment which does not conform to
your schema, I will be thrown an HL7Exception, because the implemented Group
interface is:
"an abstraction representing >1 message parts which may be repeated together.
An implementation of Group should enforce constraints on the contents of the
group and throw an exception if an attempt is made to add a Structure that the
Group instance does not recognize."
http://hl7api.sourceforge.net/xref/ca/uhn/hl7v2/model/Group.html
any implication that all segments are predefined (even Z segments) does not
square with the above statements and will not automatically guarantee that the
asserts from HL7api are met.
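MG>and HAPI's intended path for such a class is to register it with a
CustomModelClassFactory before parsing, i.e. it still has to be coded and
packaged a priori; a rough, untested sketch (the com.example.hl7 package is
illustrative):

import ca.uhn.hl7v2.DefaultHapiContext;
import ca.uhn.hl7v2.HL7Exception;
import ca.uhn.hl7v2.HapiContext;
import ca.uhn.hl7v2.model.Message;
import ca.uhn.hl7v2.parser.CustomModelClassFactory;

public class ZSegmentParsing {
    // parse an HL7 v2 string, resolving custom (Z) model classes from the
    // hypothetical com.example.hl7 package tree (e.g. com.example.hl7.v25)
    public static Message parse(String hl7) throws HL7Exception {
        HapiContext context = new DefaultHapiContext();
        // custom classes like the ZBD above must exist and be registered here
        // before parsing for the Z segment to come back as a typed class
        context.setModelClassFactory(new CustomModelClassFactory("com.example.hl7"));
        return context.getPipeParser().parse(hl7);
    }
}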
MG>this has little to do with kafka so let's take this offline
> On Wed, Oct 12, 2016 at 10:10 PM, Martin Gainty <mgai...@hotmail.com> wrote:
>
> >
> >
> >
> > > From: dbis...@gmail.com
> > > Date: Wed, 12 Oct 2016 20:42:04 -0400
> > > Subject: RE: HL7 messages to Kafka consumer
> > > To: users@kafka.apache.org
> > >
> > > I did it with HAPI API and Kafka producer way back when and it worked
> > > well.
> > > Times have changed. If you consider using Apache Nifi, besides the native
> > > HL7 processor,
> > MG>since this is where I get 99% of the applications I work on, I have to
> > ask: will Nifi process Z segments?
> > MG>if Nifi does not process Z segments you might want to delay being
> > a Nifi evangelist and go with the aforementioned solution
> > > you can push to Kafka by dragging a processor on canvas. The HL7
> > > processor is also built on HAPI API. Here's an example, but instead of
> > > Kafka it's pushing to Solr; replacing the Solr processor with Kafka will
> > > do the trick.
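MG>for anyone going the plain HAPI-plus-producer route mentioned above instead
of Nifi, a bare-bones, untested sketch follows (broker address, topic name and
the sample message are made up):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import ca.uhn.hl7v2.model.Message;
import ca.uhn.hl7v2.parser.PipeParser;
import ca.uhn.hl7v2.util.Terser;

public class Hl7ToKafka {
    public static void main(String[] args) throws Exception {
        String hl7 = "MSH|^~\\&|HIS|RIH|EKG|EKG|200610120600||ADT^A01|MSG00001|P|2.5\r"
                   + "PID|||123456||DOE^JOHN\r";

        // validate/parse the pipe-delimited message with HAPI
        Message msg = new PipeParser().parse(hl7);
        String controlId = new Terser(msg).get("MSH-10");

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        // key by MSH-10 so all records for a given message land on one partition
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("hl7-adt", controlId, msg.encode()));
        }
    }
}

MG>keying on MSH-10 is just one option; any field that groups related events
would do.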
> > MG>kafka server.properties does support a zk provider so kafka server can
> > ingest resultset(s) from zk
> > ############################# Zookeeper #############################
> > # Zookeeper connection string (see zookeeper docs for details).
> > # This is a comma separated host:port pairs, each corresponding to a zk
> > # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
> > # You can also append an optional chroot string to the urls to specify the
> > # root directory for all kafka znodes.
> > zookeeper.connect=localhost:2181
> > # Timeout in ms for connecting to zookeeper
> > zookeeper.connection.timeout.ms=6000
> > MG>Kafka's clear advantage over zk is flow control: pausing or resuming
> > partitions for your kafka consumer
> > MG>a possible side-effect of relying only on the zk provider would be to
> > disable this control-flow capability of kafka
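MG>for illustration, that pause/resume flow control on the new consumer API
(~0.10.x) looks roughly like the untested sketch below; the topic, group id and
back-pressure check are placeholders:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class Hl7Consumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "hl7-ingest");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("hl7-adt"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    handle(record.value());
                }
                if (downstreamBacklogged()) {
                    // stop fetching but keep polling so group membership stays alive
                    consumer.pause(consumer.assignment());
                } else {
                    consumer.resume(consumer.assignment());
                }
            }
        }
    }

    // placeholders for whatever does the real work / back-pressure signal
    private static void handle(String hl7) { System.out.println(hl7); }
    private static boolean downstreamBacklogged() { return false; }
}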
> > > Old and new consumer API is available.
> > >
> > > https://community.hortonworks.com/articles/20318/visualize-
> > patients-complaints-to-their-doctors-usi.html
> > >
> > > On Oct 12, 2016 4:33 PM, "Martin Gainty" <mgai...@hotmail.com> wrote:
> > >
> > > > provisionally accomplished task by embedding A01, A03 and A08 HL7
> > > > Event-types into SOAP 1.2 Envelopes
> > > > I remember having difficulty transporting over a non-dedicated
> > > > transport such as what Kafka implements
> > > > Producer Embeds Fragment1 into SOAPEnvelope
> > > > Producer Sends Fragment1-SOAPEnvelope of A01
> > > > Consumer pulls Fragment1 of A01 from SOAP 1.2 Body and places
> > > > SOAPEnvelope into cache
> > > > Consumer quiesces connection presumably so other SOAP 1.2 messages can
> > > > be transported
> > > > Consumer re-activates connection when sufficient bandwidth detected
> > > > (higher priority SOAP 1.2 envelopes have been transmitted)
> > > > Producer Embeds Fragment2 into SOAPEnvelope
> > > >
> > > > Producer Sends Fragment2-SOAPEnvelope of A01
> > > > Consumer pulls Fragment2 of A01 from SOAP 1.2 Body and places into cache
> > > > When Consumer detects EOT, Consumer aggregates n Fragments from cache
> > > > into the all-inclusive A01 event
> > > > Consumer parses A01 to segments
> > > > Consumer parses attributes of each segment
> > > > Consumer insert(s)/update(s) segment-attribute(s) into database
> > > > Consumer displays updated individual segment-attributes to UI and/or
> > > > displays inserted segment-attributes to UI
> > > >
> > > > Clear? Martin
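MG>the cache-then-aggregate-on-EOT step above can be sketched roughly as
follows (untested; assumes the SOAP 1.2 unwrapping has already happened, and
the control-id/EOT bookkeeping here is illustrative only):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import ca.uhn.hl7v2.HL7Exception;
import ca.uhn.hl7v2.model.Message;
import ca.uhn.hl7v2.parser.PipeParser;

public class FragmentAssembler {
    // fragments of each in-flight A01, keyed by message control id (MSH-10)
    private final Map<String, List<String>> cache = new HashMap<>();
    private final PipeParser parser = new PipeParser();

    // called once per fragment pulled out of a SOAP 1.2 Body
    public Message onFragment(String controlId, String fragment, boolean endOfTransmission)
            throws HL7Exception {
        cache.computeIfAbsent(controlId, k -> new ArrayList<>()).add(fragment);
        if (!endOfTransmission) {
            return null;                       // keep caching until EOT
        }
        // EOT: aggregate n fragments into the all-inclusive A01 and parse it
        StringBuilder whole = new StringBuilder();
        for (String part : cache.remove(controlId)) {
            whole.append(part);
        }
        return parser.parse(whole.toString()); // segments/attributes come from this Message
    }
}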
> > > > ______________________________________________
> > > >
> > > >
> > > >
> > > > > From: samglo...@cloudera.com
> > > > > Date: Wed, 12 Oct 2016 09:22:32 -0500
> > > > > Subject: HL7 messages to Kafka consumer
> > > > > To: users@kafka.apache.org
> > > > >
> > > > > Has anyone done this? I'm working with a medical hospital company
> > > > > that wants to ingest HL7 messages into Kafka cluster topics.
> > > > >
> > > > > Any guidance appreciated.
> > > > >
> > > > > --
> > > > > *Sam Glover*
> > > > > Solutions Architect
> > > > >
> > > > > *M* 512.550.5363 samglo...@cloudera.com
> > > > > 515 Congress Ave, Suite 1212 | Austin, TX | 78701
> > > > > Celebrating a decade of community accomplishments
> > > > > cloudera.com/hadoop10
> > > > > #hadoop10
> > > >
> >
> >