Artefacts

Source Code

This page lists a few guidelines and expectations regarding the assessment baseline.

Artefacts of the codebase

  • The code baseline is made of elements called artefacts
  • An artefact is a single file corresponding to a single piece of the application, e.g.:
    • A single program
    • A single control card
    • A single copybook
    • A single database schema
    • Etc.

 

Versions wise

  • Artefact versions are those used to build the application version currently in production

 

Format wise

  • All artefacts are in native format and native encoding (no charset or encoding is changed): no listing or printing formats
  • Export/dump of artefacts is done using the legacy platform's native tools
  • No charset or encoding transformation (e.g. EBCDIC encoding is kept)
  • Transfer/offload is binary (not ASCII)
  • If the artefacts are provided by the user, we expect all code and database description artefacts to be in a textual format (or in one of the special formats described below).
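To illustrate why binary transfer and preserved encoding matter, here is a minimal Python sketch. It assumes code page cp037, one common EBCDIC variant; the actual code page varies per site, so this is illustrative only.

```python
# Bytes as they exist on the mainframe (EBCDIC cp037, an assumed code page)
record = "HELLO".encode("cp037")

# These bytes are not ASCII: a text-mode (ASCII) transfer would remap
# or corrupt them, losing the original artefact.
assert record == b"\xc8\xc5\xd3\xd3\xd6"

# Because the transfer was binary, the original bytes are intact and can
# still be decoded later with the correct code page:
assert record.decode("cp037") == "HELLO"
```

A text-mode transfer would silently translate these bytes and make a faithful later conversion impossible, which is why the raw encoding must be kept.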

 

Content wise

  • All artefacts contributing to the build or the run of the application in production
  • Scheduler configuration (e.g. OPC, SCL)
  • Scripts (e.g. JCL, REXX)
  • Batch scripts (e.g. CMD, BAT)
  • Screens (e.g. BMS maps, PROCS)
  • Programs (e.g. COBOL, PL/1, Assembler, Easytrieve, SAS), copybooks, and any of their dependencies
  • Schema and metadata (e.g. LISTCAT) of the persistence layer (e.g. DB2 DDLs, IMS (MFS, PSB, DBD), VSAM and GDG formats)
  • Parameter and configuration files (such as control cards – CTL – or the CICS system definition – CSD file) used to determine execution flow and resolve dynamic calls
  • All existing documentation: naming convention, architecture document, interfaces description, etc.

 

Special Formats

BluInsights can extract a suitable codebase from special formats such as:

  • SAVF format for iSeries applications.
  • PBL format for PowerBuilder applications.

 

Test Data

Depending on the project (customer requirements, legacy technologies, etc.), different types of test cases can be required.

Screen Test Cases:

  • Video recordings of all the screens' positive and negative cases
    • 1 video with a positive case and 1 video with negative cases, per test case
    • Video duration should be limited to ~10 minutes per screen
  • Database extractions related to the Screen
    • At start of the recording
    • At the immediate end of the recording
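The two database extractions bracket the recording so the screen's data effects can be isolated. A minimal sketch of that comparison, assuming each extraction is loaded as a mapping from record key to row (the format and values below are illustrative assumptions):

```python
def extract_diff(before, after):
    """Compare two database extractions (key -> row mappings)."""
    added   = {k: v for k, v in after.items() if k not in before}
    removed = {k: v for k, v in before.items() if k not in after}
    changed = {k: (before[k], after[k])
               for k in before.keys() & after.keys() if before[k] != after[k]}
    return added, removed, changed

# Illustrative extractions taken at the start and end of a recording
before = {"CUST01": ("ALICE", 100)}
after  = {"CUST01": ("ALICE", 250), "CUST02": ("BOB", 50)}

added, removed, changed = extract_diff(before, after)
assert added   == {"CUST02": ("BOB", 50)}
assert removed == {}
assert changed == {"CUST01": (("ALICE", 100), ("ALICE", 250))}
```

The diff of the two snapshots is what the migrated screen must reproduce when replaying the same scenario.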

See Test Capture & Replay for TN3270 and TN5250 for how BluInsights can automate the definition of these test cases.

Batch Test Cases:

  • Inputs for the job execution (configuration, parameters, files…)
  • Outputs generated by the job (files, reports…)
  • Execution log of a single batch job or step (including its duration)
  • Database extractions related to the batch before and after execution
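One way to keep batch test cases complete is a simple completeness check over each bundle. The folder and file names below are assumptions for illustration, not a BluInsights convention:

```python
from pathlib import Path

# Pieces every batch test case bundle is expected to contain
# (names are hypothetical, chosen to mirror the list above)
REQUIRED = [
    "inputs",         # job configuration, parameters, input files
    "outputs",        # files and reports generated by the job
    "execution.log",  # log of the job or step, including its duration
    "db_before",      # database extraction before execution
    "db_after",       # database extraction after execution
]

def missing_pieces(case_dir: Path) -> list:
    """Return the required pieces missing from a batch test case folder."""
    return [name for name in REQUIRED if not (case_dir / name).exists()]
```

Running such a check before handover avoids discovering an incomplete test case only at replay time.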

Backend Test Cases:

  • Execution log of a back-end service (including response time)
  • Database extractions related to the backend service before and immediately after execution
  • Recording of backend input and output messages {(Input Message 1, Output Message 1) … (Input Message n, Output Message n)}
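The recorded (input, output) pairs can later be replayed against the migrated service and compared one by one. A minimal sketch, where `call_service` is a placeholder for the real backend invocation:

```python
def replay(pairs, call_service):
    """Replay recorded (input_msg, expected_output_msg) pairs.

    Returns a list of (pair_number, expected, actual) mismatches;
    an empty list means the service matches its recordings.
    """
    mismatches = []
    for i, (msg_in, expected_out) in enumerate(pairs, start=1):
        actual_out = call_service(msg_in)
        if actual_out != expected_out:
            mismatches.append((i, expected_out, actual_out))
    return mismatches

# Usage: an echo service trivially matches its own recordings
recorded = [("PING", "PING"), ("QUERY 42", "QUERY 42")]
assert replay(recorded, lambda m: m) == []
```

Capturing both sides of every exchange is what makes this pairwise comparison possible after migration.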

Database Test Cases:

Database extractions for the tables related to the POC migration:

  • One with a large volume of data
  • One with a smaller volume of data per table (fewer records) than a production data set

For a project migration, all the used tables will be extracted.