Hi all,

And the newer version not only polls at a given interval, but also lets you 
define a condition on the PLC that determines when a fetch operation should 
be triggered.
Unfortunately this is the part where the S7 dependency comes in ... the version 
in the current develop branch and the first next-gen-core branch would only 
allow this with S7 connections.
This is the part I had to disable in the next-gen-core-2 branch, and it was 
the main reason for me to suggest rewriting the scraper.
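To illustrate the idea of a condition-triggered fetch, here is a minimal, hypothetical sketch in plain Java. The class name `TriggeredScraper`, the `pollOnce` method, and the trigger supplier are invented for this example and are not the actual scraper API; a real implementation would read the trigger field and the payload from a PLC connection.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Hypothetical sketch: each poll interval, read a trigger field and only
// run the (expensive) fetch when the trigger condition holds.
public class TriggeredScraper {
    private final Supplier<Integer> triggerField; // stand-in for a PLC field read
    private final int threshold;                  // the configured trigger condition
    private final List<String> fetched = new ArrayList<>();

    public TriggeredScraper(Supplier<Integer> triggerField, int threshold) {
        this.triggerField = triggerField;
        this.threshold = threshold;
    }

    // Called once per poll interval by a scheduler.
    public void pollOnce() {
        int value = triggerField.get();   // one trigger read per poll
        if (value >= threshold) {
            fetched.add("fetch@" + value); // stand-in for the real data fetch
        }
    }

    public List<String> results() {
        return fetched;
    }
}
```

The point is only that the trigger evaluation is decoupled from the fetch itself, which is what ties the current implementation to S7 trigger semantics.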

Chris

On 13.01.20, 09:20, "Julian Feinauer" <[email protected]> wrote:

    Simply said, it is exactly that.
    From a simple configuration (programmatic or YAML files) it just
    frequently fetches data and forwards it to a given handler.
    And in the background we have all the necessary boilerplate: connection
    pool, ... .
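    As an illustration only, such a configuration might look something like
    the YAML sketch below. All keys and addresses here are invented for the
    example and are not necessarily the scraper's actual schema.

```yaml
# Hypothetical example -- all keys and addresses are illustrative.
sources:
  machineA: s7://192.168.0.10/0/1   # connection string for one PLC
jobs:
  - name: boiler-temperatures
    scrapeRate: 1000                # poll interval in milliseconds
    sources:
      - machineA
    fields:
      boilerTemp: '%DB1.DBW0:INT'   # PLC field address to fetch
```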
    
    Julian
    
    On 13.01.20, 07:43, "Álvaro Del Castillo"
    <[email protected]> wrote:
    
        Morning!
        
        > So I think if we make a clear concept and just rewrite it, it should
        > be pretty easy and get more robust.
        > Under the hood it's not much more than a Connection Pool and a
        > ScheduledExecutor, right?
        
        Any place where I can read what the Scraper goals are apart from
        reading the code?
        
        From this comment, I imagine that it is a tool to collect the data from
        different PLCs in a scheduled way. 
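        For what it's worth, the "Connection Pool and a ScheduledExecutor"
        idea from the quote above can be sketched roughly as follows. The
        names are illustrative only: a `BlockingQueue` stands in for a real
        connection pool, and a string stands in for an actual PLC read.

```java
import java.util.concurrent.*;
import java.util.function.Consumer;

// Minimal sketch of "connection pool + ScheduledExecutor": a job that
// borrows a connection, performs a read, and hands the result to a
// handler at a fixed rate. Not the scraper's real API.
public class MiniScraper {
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
    private final BlockingQueue<String> pool; // stand-in for a real connection pool
    private final Consumer<String> handler;

    public MiniScraper(int poolSize, Consumer<String> handler) {
        this.pool = new LinkedBlockingQueue<>();
        for (int i = 0; i < poolSize; i++) {
            pool.add("conn-" + i);
        }
        this.handler = handler;
    }

    public ScheduledFuture<?> schedule(long periodMillis) {
        return scheduler.scheduleAtFixedRate(() -> {
            String conn = null;
            try {
                conn = pool.take();                   // borrow a connection
                handler.accept("read via " + conn);   // stand-in for the PLC read
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                if (conn != null) {
                    pool.add(conn);                   // return it to the pool
                }
            }
        }, 0, periodMillis, TimeUnit.MILLISECONDS);
    }

    public void shutdown() {
        scheduler.shutdownNow();
    }
}
```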
        
        Cheers!
        
        
        > 
        > Best!
        > Julian
        > 
        > On 12.01.20, 13:32, "Lukas Ott" <[email protected]> wrote:
        > 
        >     Hi,
        >     
        >     +1 for rewriting the scraper. In my humble opinion the PLC4X
        >     project still aims for multiple language support, and the
        >     scraper, including its integration into Calcite, Kafka and
        >     Logstash, is a core capability that should be supported.
        >     
        >     Lukas
        >     
        >     On Sun., 12 Jan. 2020 at 12:51, Christofer Dutz <
        >     [email protected]> wrote:
        >     
        >     > Hi all,
        >     >
        >     > for about 7 full days I have been cleaning up the new branch
        >     > in order to port all the other drivers besides the S7 to the
        >     > new API …
        >     > This forced me to go through just about all the modules we have.
        >     >
        >     > One module that worries me, however, is the scraper. It’s a
        >     > core module we use in the calcite-integration, kafka-connect
        >     > and the logstash module.
        >     > The last two are gaining quite a lot of traction.
        >     >
        >     > However, having dug into the current scraper, I feel very
        >     > uncomfortable with it … so I would propose completely
        >     > rewriting it. I tried refactoring it for 1.5 days and just
        >     > recently reverted my changes … currently I’m just trying to
        >     > get things to build again.
        >     >
        >     > As Julian told me, he and his company have sort of moved away
        >     > from the scraper to something new … I would like to discuss
        >     > the alternatives with you.
        >     >
        >     > Right now it feels impossible for me to provide support if
        >     > anything goes wrong in the scraper.
        >     >
        >     > Chris
        >     >
        >     >
        >     >
        >     
        > 
        
        
    
    
