# solution

**Repository Path**: pikzas/solution

## Basic Information

- **Project Name**: solution
- **Description**: solution
- **Primary Language**: Java
- **License**: MulanPSL-2.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-07-07
- **Last Updated**: 2021-07-09

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# solution

#### Task description

This is a programming test assignment for candidates. You must work on your own and submit your work within 3 days. The whole solution must be your original work. If anything is unclear and not covered in the assignment, you may make your own reasonable assumptions and complete the work; state those assumptions in the README.md file. You should provide a README.md with instructions for executing the solution and a brief description of the decisions made.

What we are looking for:

* Functional correctness (the solution works)
* Usage of the Java Stream API (`java.util.stream`) — to clear any doubt, we do not mean Kafka Streams
* Expandability of the design
* The submitted code is tested, with test steps and test results included in the README.md file

Story

As a securities settlement company in a simple market, I want to read the trade records from customers, then capture and enrich the trade details, so that the trades can be sent to other downstream services for further processing.

Acceptance Criteria

* Trade Capture Service:
  * Monitor and read text files containing an array of JSON objects (e.g. trade records) in a specific location
  * Read the JSON objects inside the file and transform them into another JSON data format (see example below)
  * Publish the JSON objects to a Kafka topic for downstream services
* Downstream Feed Service 1:
  * Every 5 minutes, read the data from the Kafka topic and output it into a new text file in a specific location for service A
* Downstream Feed Service 2:
  * Every 1 minute, read the data from the Kafka topic and output it into ANOTHER new text file in ANOTHER specific location for service B

Example of input file:

```json
[
  {
    "tradeReference": "T00001",
    "accountNumber": "1300218277",
    "stockCode": "KO",
    "quantity": "288.120000",
    "currency": "USD",
    "price": "1234.56",
    "broker": "B00123"
  },
  { ... (another JSON record) },
  ... (more JSON records)
]
```

Example of output file:

> Note: Amount = Price * Quantity (round up). Received timestamp is server time in UTC offset.

```json
[
  {
    "tradeReference": "T00001",
    "accountNumber": "1300218277",
    "stockCode": "KO",
    "quantity": "288.120000",
    "currency": "USD",
    "price": "1234.56",
    "broker": "B00123",
    "amount": "355701.43",
    "receivedTimestamp": "2019-01-01T08:00:01.120000"
  },
  { ... (another JSON record) },
  ... (more JSON records)
]
```

#### Functional Points

0. The app is triggered by cron or runs in loop mode
1. Read files from the target path
2. Since a file may be too large to load into application memory in one go, use a FileChannel to read it part by part
3. Read the JSON array from the split file
4. Convert the JSON strings into Java beans
5. Apply a handler to each bean in the list
6. Publish the data via Kafka
7. Back up the processed file
8. Listen to the Kafka topic and handle the data at the target frequency
9. Write the data to the target path
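
The enrichment step (functional point 5) can be sketched with the Stream API. This is a minimal sketch, not the assignment's required solution: the class and method names (`TradeEnricher`, `enrich`, `amount`) are hypothetical, and a `Map` stands in for a proper trade bean. The amount follows the note in the output example (`price * quantity`, rounded up to 2 decimal places), and the timestamp pattern matches the sample `receivedTimestamp`:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TradeEnricher {
    // Pattern matching the sample output, e.g. 2019-01-01T08:00:01.120000
    private static final DateTimeFormatter TS =
            DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSSSS");

    // amount = price * quantity, rounded up to 2 decimal places,
    // using BigDecimal to avoid floating-point error on money values
    static String amount(String price, String quantity) {
        return new BigDecimal(price)
                .multiply(new BigDecimal(quantity))
                .setScale(2, RoundingMode.UP)
                .toPlainString();
    }

    // Enrich one trade record (a Map here, a real bean in practice) with
    // the computed amount and the server receive time in UTC
    static Map<String, String> enrich(Map<String, String> trade) {
        Map<String, String> enriched = new LinkedHashMap<>(trade);
        enriched.put("amount", amount(trade.get("price"), trade.get("quantity")));
        enriched.put("receivedTimestamp", LocalDateTime.now(ZoneOffset.UTC).format(TS));
        return enriched;
    }

    public static void main(String[] args) {
        List<Map<String, String>> trades = List.of(Map.of(
                "tradeReference", "T00001",
                "price", "1234.56",
                "quantity", "288.120000"));
        // Functional point 5: apply the handler to each bean via the Stream API
        List<Map<String, String>> out =
                trades.stream().map(TradeEnricher::enrich).collect(Collectors.toList());
        System.out.println(out.get(0).get("amount")); // 355701.43
    }
}
```

Running this against the sample record reproduces the `"355701.43"` amount from the example output file.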
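
Functional point 2 (reading a large file part by part with a FileChannel) might look like the sketch below. The class and method names are hypothetical, and the demo writes its own temp file so it is self-contained; the tiny 8-byte buffer is only there to force several chunks:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChunkedReader {
    // Read a file part by part through a FileChannel so the whole file
    // never has to fit in memory at once. Note: decoding each chunk
    // independently is only safe for ASCII content; multi-byte UTF-8
    // characters split across chunk boundaries would need a stateful
    // CharsetDecoder.
    static String readInChunks(Path file, int bufferSize) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(bufferSize);
            while (ch.read(buf) != -1) {
                buf.flip();                                  // switch to read mode
                sb.append(StandardCharsets.UTF_8.decode(buf));
                buf.clear();                                 // reuse for next chunk
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("trades", ".json");
        String content = "[{\"tradeReference\":\"T00001\"}]";
        Files.writeString(tmp, content);
        // 8-byte buffer: the file is read in several small chunks
        System.out.println(readInChunks(tmp, 8).equals(content)); // true
        Files.delete(tmp);
    }
}
```

In the real service the reassembled chunks would be fed to the JSON-array splitter (functional point 3) rather than collected into one string.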
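
The two downstream feed services (functional points 8 and 9) share one pattern: on a fixed schedule, drain what has arrived from the topic and write it to a new file. A sketch of that pattern is below; the names are hypothetical, and an in-memory `BlockingQueue` stands in for the Kafka topic so the demo runs without a broker (a real implementation would poll a `KafkaConsumer` instead). The demo invokes the scheduled task directly once so the output is deterministic:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class ScheduledFeedWriter {
    // Stand-in for the Kafka topic; a real service would poll a KafkaConsumer
    private final BlockingQueue<String> topic = new LinkedBlockingQueue<>();
    private final Path outputDir;

    ScheduledFeedWriter(Path outputDir) { this.outputDir = outputDir; }

    void publish(String record) { topic.add(record); }

    // Drain everything that arrived since the last run into a NEW file
    Path drainToNewFile() {
        List<String> batch = new ArrayList<>();
        topic.drainTo(batch);
        try {
            Path out = Files.createTempFile(outputDir, "feed-", ".txt");
            Files.write(out, batch);
            return out;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Run the drain at a fixed rate: periodMinutes = 5 for service A,
    // 1 for service B
    ScheduledFuture<?> start(ScheduledExecutorService ses, long periodMinutes) {
        return ses.scheduleAtFixedRate(this::drainToNewFile,
                periodMinutes, periodMinutes, TimeUnit.MINUTES);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("serviceB");
        ScheduledFeedWriter writer = new ScheduledFeedWriter(dir);
        writer.publish("{\"tradeReference\":\"T00001\"}");
        writer.publish("{\"tradeReference\":\"T00002\"}");
        // Run the scheduled task directly once so the demo is deterministic
        Path out = writer.drainToNewFile();
        System.out.println(Files.readAllLines(out).size()); // 2
    }
}
```

Parameterising the period and output directory keeps the two services as two configurations of one class, which supports the expandability criterion.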