Would it be difficult?
Added by Alan Jesser about 7 years ago
Right now I have two parts of an ETL program. The front-end is done using Lapis (Lua/OpenResty), and for the back-end I created a web server using Poco. The program runs multiple conversion processes at a time from different vendors, and each conversion has a unique database structure, so class mapping with a DBO doesn't really work. I have a custom library set up for this dynamic interface to the PostgreSQL server. Each conversion has its own database and is created as a separate thread that runs until we flag it as no longer needed from the front-end. The front-end is also used to configure a new conversion process, which is handled by the back-end. I don't really have any issues with this encapsulated way of handling the conversion class/object. But the back-end does all the work and then returns JSON through a REST service.
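Roughly, each conversion worker follows this pattern (a simplified sketch only; the class and method names here are illustrative, not the actual code):

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Simplified sketch of the per-conversion worker described above: one thread
// per conversion that keeps running until the front-end flags it as done.
class Conversion {
public:
    void start() {
        worker_ = std::thread([this] {
            while (!stopRequested_.load()) {
                processBatch();   // vendor-specific extract/transform/load step
                std::this_thread::sleep_for(std::chrono::seconds(1));
            }
        });
    }

    void requestStop() { stopRequested_ = true; }   // flagged from the front-end

    ~Conversion() {
        requestStop();
        if (worker_.joinable())
            worker_.join();
    }

private:
    void processBatch() { /* read from the vendor schema, transform, load */ }

    std::atomic<bool> stopRequested_{false};
    std::thread worker_;
};
```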
From what I remember of Wt (it's been a few years since I actively used it), the widgets and such are generally tied into what you're doing. While my current setup can build a dynamic front-end based on the JSON returned, I'm not sure how much of a hassle that would be to implement with Wt. I'm also not sure if I could switch from my custom database library to Wt::Dbo because of how different the database setup is for each conversion object.
Replies (3)
RE: Would it be difficult? - Added by Koen Deforche about 7 years ago
Hey,
(I had to look up ETL, we always learn)
I'm not sure what setup you have in mind with the JSON: is the JSON being interpreted in browser-side JavaScript or in server-side Lua?
Anyway, although the widgets do not require you to layer your functionality into UI / model, it is commonly done for anything but simple applications. You could have the logic in separate model classes that are then used from within a thin UI layer, and these model classes could also be talking to other processes or (REST) services. For interfacing with a REST service, Wt comes with an (asynchronous) Wt::Http::Client, and there's also basic JSON parsing / serialization support, but you can use other implementations as well.
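As a rough idea of what such a thin model layer could look like (a minimal Wt 3-style sketch; the endpoint URL and class name are just placeholders):

```cpp
#include <boost/system/error_code.hpp>
#include <Wt/WObject>
#include <Wt/Http/Client>
#include <Wt/Http/Message>
#include <Wt/Json/Object>
#include <Wt/Json/Parser>

// Minimal sketch of a model class that talks to an existing REST back-end
// asynchronously.  Endpoint and names are invented for the example.
class ConversionModel : public Wt::WObject {
public:
    explicit ConversionModel(Wt::WObject *parent = 0)
        : Wt::WObject(parent),
          client_(new Wt::Http::Client(this)) {
        client_->setTimeout(15);
        client_->done().connect(
            [this](boost::system::error_code err, const Wt::Http::Message& response) {
                if (!err && response.status() == 200) {
                    Wt::Json::Object result;
                    Wt::Json::parse(response.body(), result);
                    // hand the parsed data to the UI layer from here
                }
            });
    }

    void fetchStatus() {
        // hypothetical endpoint exposed by the existing back-end
        client_->get("http://localhost:8080/api/conversions/status");
    }

private:
    Wt::Http::Client *client_;
};
```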
As to Wt::Dbo: it should be possible to map it to most relational models, at least that was the motivation to add the flexibility that currently is there, but SQL schemas can always have esoteric features. If so, we'll be happy to know and suggest (or implement) solutions.
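For reference, a class mapping in Wt::Dbo is just a persist() method like the one below (a minimal sketch; the table and field names are invented):

```cpp
#include <Wt/Dbo/Dbo>
#include <string>

// Minimal sketch of a Wt::Dbo class mapping; each mapped class describes its
// own columns in persist().  Names here are purely illustrative.
class ConversionRun {
public:
    std::string vendor;
    std::string status;
    int         rowsProcessed;

    template<class Action>
    void persist(Action& a) {
        Wt::Dbo::field(a, vendor,        "vendor");
        Wt::Dbo::field(a, status,        "status");
        Wt::Dbo::field(a, rowsProcessed, "rows_processed");
    }
};
```

Since every mapped class needs such a persist() method at compile time, a schema that is only known at runtime doesn't fit this directly.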
RE: Would it be difficult? - Added by Alan Jesser about 7 years ago
To better explain, all the logic is done on the back-end with my custom web server. When the back-end sends data to the front-end it's in JSON format; post data from the front-end to the REST service is JSON or simple query strings. It's either simple responses, or it contains the mapping information the front-end needs to set up the UI. For example, vendor A has a schema with 150 columns, each with varying data types and names. Vendor B could have 200 columns of varying data types and names. Because of that, a new database is created for each vendor. So we can't really create a class map unless we create one for each vendor every time we do a new conversion, which isn't really practical. It also means I maintain multiple connections to multiple databases.
On our end the schema usually doesn't change, but in the past year we've had enough demand for changes that our schema has been changing more than I'd like. Thus a class mapping would work most of the time, but handling it dynamically is easier. My database class emulates tuples because it was created before I got a server with a C++11/14 compiler. So it's really a thin layer over the database C library, but it's abstracted out enough that it works between our DB2 and PG databases.
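To give an idea of what that thin layer does, it's conceptually along these lines (a heavily simplified sketch over libpq, not my actual code, with error handling trimmed):

```cpp
#include <libpq-fe.h>
#include <map>
#include <string>
#include <vector>

// Results come back as column-name -> text maps, so no per-vendor class
// mapping is needed; the schema is discovered from the result set itself.
using Row = std::map<std::string, std::string>;

std::vector<Row> query(PGconn *conn, const std::string& sql) {
    std::vector<Row> rows;
    PGresult *res = PQexec(conn, sql.c_str());
    if (PQresultStatus(res) == PGRES_TUPLES_OK) {
        const int nFields = PQnfields(res);
        for (int r = 0; r < PQntuples(res); ++r) {
            Row row;
            for (int c = 0; c < nFields; ++c)
                row[PQfname(res, c)] = PQgetisnull(res, r, c)
                                           ? std::string()
                                           : PQgetvalue(res, r, c);
            rows.push_back(row);
        }
    }
    PQclear(res);
    return rows;
}
```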
If I could do everything in Wt I wouldn't need to return JSON, because I could keep it all within the application and use proper classes and containers. The REST service wouldn't need to be there either, since it only exists so I can have a web UI for the program.
The main concern I have comes from reading about how sessions are handled: a WApplication is created and destroyed on a per-session basis. The conversion object needs persistence, because after the user has set up the mappings they will run it. At that point you just walk away, and depending on the data set it could take a few hours to process. There are also instances in which the process is run automatically at regular intervals.
It boils down to needing a persistent object, running in its own thread, to handle a conversion, with a web UI for the setup, configuration, checking the status, etc.
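Conceptually, what I'm after is something like this (names are purely illustrative): conversions live in a server-wide registry rather than in any session, so they survive WApplication creation/destruction and can run for hours.

```cpp
#include <map>
#include <memory>
#include <mutex>
#include <string>

class Conversion;   // the long-running worker sketched earlier in the thread

// Illustrative sketch: conversions are owned by a server-wide registry, not by
// any WApplication, so they outlive individual sessions.  Sessions only look
// conversions up by id to configure them or check their status.
class ConversionRegistry {
public:
    static ConversionRegistry& instance() {
        static ConversionRegistry registry;
        return registry;
    }

    void add(const std::string& id, std::shared_ptr<Conversion> conversion) {
        std::lock_guard<std::mutex> lock(mutex_);
        conversions_[id] = std::move(conversion);
    }

    std::shared_ptr<Conversion> find(const std::string& id) {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = conversions_.find(id);
        return it != conversions_.end() ? it->second : nullptr;
    }

private:
    std::mutex mutex_;
    std::map<std::string, std::shared_ptr<Conversion>> conversions_;
};
```

From what I can tell, Wt's server push (WApplication::enableUpdates() plus WServer::post()) would be the usual way to push progress into an open session, though polling from the UI would work too.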
RE: Would it be difficult? - Added by Alan Jesser about 7 years ago
I answered my own question by digging deeper into the examples and the API. My jumping-off point was the Blog example, which shows how to create a custom entry point. From there I just instantiate a shared_ptr to the conversion object and pass it around. Not much different from how my custom web server currently handles it.
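Roughly like this (a simplified Wt 3-style sketch; Conversion and the deployment path are placeholders): the shared_ptr is created outside any session and captured by the application creator, so every WApplication instance gets a handle to the same conversion object.

```cpp
#include <memory>
#include <Wt/WApplication>
#include <Wt/WEnvironment>
#include <Wt/WServer>
#include <Wt/WText>

struct Conversion { /* the long-running worker from the earlier sketch */ };

// Each session gets a handle to the same persistent conversion object.
class ConversionApp : public Wt::WApplication {
public:
    ConversionApp(const Wt::WEnvironment& env,
                  std::shared_ptr<Conversion> conversion)
        : Wt::WApplication(env), conversion_(conversion) {
        new Wt::WText("Conversion setup / status UI goes here", root());
    }

private:
    std::shared_ptr<Conversion> conversion_;
};

int main(int argc, char **argv) {
    Wt::WServer server(argc, argv);

    // Created once, outside any session, so it survives session teardown.
    auto conversion = std::make_shared<Conversion>();

    server.addEntryPoint(Wt::Application,
                         [conversion](const Wt::WEnvironment& env) {
                             return new ConversionApp(env, conversion);
                         },
                         "/conversion");

    if (server.start()) {
        Wt::WServer::waitForShutdown();
        server.stop();
    }

    return 0;
}
```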