
Platform Service Transformation: Entry 2 – Platform Architecture and Redesign: Phase One

[Image: Architectural view]



2nd Entry

The purpose of this project is to unbind a great service and product from debilitating, confusing and circuitous code. As in many cases, the product's code is the result of many years of shifting code practices, changing developers, and methods abandoned in favor of "quick fixes" to support Production. The result is a hairy, overgrown, complex codebase with nothing but dead ends in its future (in other words, non-scalable, with too many hard-coded restrictions).

The challenge is to re-design and construct a new infrastructure that IS scalable, flexible and elegant, without changing the User Experience. The experience may be improved for the sake of a more intuitive workflow, but the functionality on which the customer base depends must be retained.

The Tool Stack being used in the current project for code re-factoring and re-design:
In order to transform a platform riddled with inefficient code and workflow paths, we are consolidating DB calls and posts, using the following to create new service-based middleware (to replace the PHP assignments): cloud-based environments for DEV, DevOps and R&D; Ruby; Java; RAML for the new APIs; Elasticsearch; Kibana; Jenkins; Cucumber; PHP (unraveling and re-assigning); VersionOne; Apache; Oracle; customized code generation; common sense; and top-down development with sensible deliveries for each sprint. Each of the two teams (Dev and UI/Biz) owns its parts, and there are also intersections between the team functions for each team member, some more than others.

[Image: Cloud architecture map (example, not actual)]
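To make the middleware direction concrete, here is a minimal sketch (not our actual code) of the kind of service endpoint that replaces a hard-coded PHP assignment with a service call. It assumes a plain Java HTTP service fronting a consolidated data lookup; the endpoint path, the IDs and the in-memory map are purely illustrative, and in the real platform the lookup would sit behind a RAML-described API over the consolidated Oracle calls.

import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Map;

/**
 * Illustrative sketch only: a tiny HTTP service that fronts a consolidated
 * data lookup, standing in for one of the hard-coded PHP assignments.
 * The endpoint path and the in-memory map are hypothetical; in the real
 * platform the lookup would be a consolidated Oracle call behind the API.
 */
public class AssignmentService {

    // Hypothetical stand-in for a consolidated DB call (e.g., an Oracle view).
    private static final Map<String, String> ASSIGNMENTS = Map.of(
            "1001", "{\"assignmentId\":\"1001\",\"status\":\"OPEN\"}",
            "1002", "{\"assignmentId\":\"1002\",\"status\":\"FILLED\"}");

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // GET /assignments/{id} returns the assignment as JSON, or a 404 error.
        server.createContext("/assignments/", exchange -> {
            String path = exchange.getRequestURI().getPath();
            String id = path.substring(path.lastIndexOf('/') + 1);
            String body = ASSIGNMENTS.get(id);
            int status = (body != null) ? 200 : 404;
            if (body == null) {
                body = "{\"error\":\"assignment not found\"}";
            }
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(status, bytes.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(bytes);
            }
        });

        server.start();
        System.out.println("Listening on http://localhost:8080/assignments/{id}");
    }
}

A GET to /assignments/1001 would return the assignment as JSON, and anything unknown gets a clean 404. The point is that each such endpoint standardizes behavior that is currently scattered across hard-coded PHP pages.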

While we have spent a good bit of time re-engineering the product, we have realized that our demos are limited to reflecting the present level of functionality of the existing product. In order to fully deliver the needed functionality at the needed scale, we will begin development from the top down, rather than with the discovery-based bottom-up approach. The bottom-up approach has been useful in revealing many of the flaws, design complexities, inefficiencies, and workflows. This realization also keeps us from building certain functional sections, such as Security (already complex), into a flawed model. Once the present demo is completed for the executives, the functionality uncovered by the bottom-up approach will be retained, but it will be developed from the top down.

This shift in approach will allow complex features to be implemented from a fresh start, designed from the outset to be scalable and efficient.

Next Entry coming soon!


Vendor Management Service Transformation: Entry 1 – Re-Factoring, Business Architecture

[Image: Agile methods and practices chart (Italian Agile Manifesto)]

Entry 1    3.22.2015

I was recently invited, in mid-March 2015, to join a project for a Vendor Management Service (VMS). In Phase 1, the project is to re-factor our Client's code by replacing the hard-coded middleware with services, and to add new client-facing features along with a new UI. All of this needs new documentation, of which there is currently very little.

Our client provides a turnkey service for managing IT vendors who need to outsource their HR, Recruiting, Accounting, and Financial Services for this aspect of their business.

My role is to document the present legacy Business Processes, the new Processes and Services, and the newly re-factored APIs and added features by providing the Requirements, Use Cases, Workflows and Processes.

The leadership on this project is not only setting the pace, but shining a bright light into the future vision for this client, and for the VMS industry. It is a privilege to work with them.

 

Presently, I am awash in the project ramp-up and assimilation of the many layers, features and infrastructure required to successfully launch a program as complex as this.

We have two teams: one is onsite with the customer's full-time employees and a fly-in contingent of our leadership. The other is an offsite team in Atlanta that is providing an Agile-based component for delivery of the new code, which provides the new Service APIs and integration, as well as Leadership, Business Architecture, and Process Articulation & Documentation. The client will observe the present SDLC-based approach for now.

We have defined the primary users and their roles, and the features, both new and old, associated with those roles. Some of this functionality will remain as legacy for now, while other parts are new; there are around 400 of these features. Some are Epics, each requiring several of the features to support its workflows.

For the new and replacement pieces (in Agile), we have defined the primary "Day in the Life", the "need to the ass in the seat" end-to-end (E2E) process, to establish a critical Happy Path. Variations and UCM will be modeled based upon this primary structure.

The software and coding will be the same, albeit updated. Specific usage will vary based upon the needs and systems of the client-users of this system.

The SDLC pieces, such as Data and QA, will be driven from the client sites.

I will be updating this log at various points along the way…. so STAY TUNED!!

Bill Fulbright

Words That Sell Software Testing

Here is a very helpful article by Simon Knight, who has done his homework on powerful words that can help you in your career:

Words That Sell Software Testing
by Simon Knight

Some time ago I decided to re-write my About Me page so as to incorporate some lessons learnt from research into sales, marketing and in particular – copywriting. While doing so it made sense to look for words that would lend weight to the message I wanted to convey. I turned to the book Words That Sell for inspiration and as a result, developed my lists of Words That Sell Software Testing below:

Technical words that dazzle the listener or reader with the cutting-edge possibilities of a product or service:

Powerful
Functionality
Performance
Transforms
Maximises
High-capacity
High-performance
Advanced
Sets the standard

Cerebral words that appeal to the head and that carry a tone of maturity and competence:

Assurance
Collaborative
Continuous
Control
Effective
Essential
Integral
Investigate
Logical

Internet of Things – The new User Interface – Do we need new test tools?


Director Test Strategy and Consulting

[Image: wrist pad wearable]

Just wondering. The Internet of Things will be massive: wearable devices for Health, Medicine, Communication, Entertainment, Functional Workplace Applications, etc. There are as many applications under development, and more we haven't yet seen, that will challenge the test methodologies we use for our present systems and environments.

Imagine the testing required for a brain wave synchronizer driven by an application and data residing in the cloud, one that both captures the user's experiential responses and governs them. The uses in this case are vast: relaxation, accelerated learning, medical monitoring of brain wave activity, treatment of ADHD, transmission of that data to and from subscribers, etc. I can imagine the Test Strategy Document, the Test Plan, the lab work, the logistics and planning. And where are the test resources with the skills to run the full gamut of tests? This was a product I developed back in the '80s, but back then I was the testing guinea pig!!

[Image: Intel wearable]

We will need to step it up to keep up with the variety and depth of new applications. Creative thinking, innovative approaches to capturing the device dynamics, and reporting those as metrics… I think it is a very exciting time, and we will see this explosion happen over the next 15 years. It is inevitable.

You might want to consider: What does this mean to you? How will you remain relevant? Does this mean your present skills are already obsolete, or that you will have to learn something new (I certainly hope so!)?

Let me know how and why you think this will impact your testing career!

Bill

BPM Testing – Get the Best Bang for your Buck!

[Image: QA 2100 BPM Testing]

Interested in getting the best bang for your buck with BPM Product Design, Development, Strategy, Testing, and Implementation?

Need a lift?  We can help!

Give us the opportunity to provide you with our assessments.  We have USA resources, and fully experienced offshore capacities for development, testing and delivery.

We have lived it for over 8 years and provided some of the finest products in the Insurance and Banking Industries.

Contact: Bill Fulbright
Company: QA 2100 Test Strategy and Consulting
Website: http://qa2100.com
Email: bill@qa2100.com

BPM Testing in Today's Market – QA 2100's BPM Testing Toolkits

QA 2100’s BPM Open Source and Web Service Testing Toolkits

The behavior of many BPM service-based applications is governed by business processes and workflows, which are defined by business rules. These business rules must be validated during application testing. For many firms, testing business rules is a costly and complicated process that involves both business users and testers. QA 2100 has invested in state-of-the-art automated BPM test methods and tools integrated by Pega Systems into PegaRULES Process Commander® (PRPC) V.X and the Test Management Framework, Bonita, and other open-source BPM products. Within the framework of PRPC and other BPM products is a design process that utilizes not only business processes but also a Requirements Definition tool that clarifies the requirements process. This process turns use cases based on requirements into design, thus providing fundamental testing paths for automated testing of the BPM framework. It allows you to develop an application using a design based upon business rules, use cases, best-practice development and quality principles.

Automated business rules and workflow validations can lower your testing time by 95%

Test Automation Using QA 2100 BPM Testing Toolkits
QA 2100 takes business rules validation testing one step further by automating the creation of test scripts using parameterized data and automating the execution of test cases. For example, QA 2100’s accelerator can execute 65 rule validations in 1.1 minutes using automation, versus 32.5 hours for manual execution. We use the Automated Unit Testing functionality within Pega PRPC to help you build a series of test cases to satisfy test requirements defined by the business requirements and use cases. These test cases are the foundation for automated test scripts. Automated test scripts can be built to pass from workflow to workflow, thus describing a partial or complete path through the application for scenario or end-to-end testing.
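As a generic illustration of that data-driven approach (outside of PRPC, whose Automated Unit Testing does this natively), here is a minimal JUnit 5 sketch in which each CSV row is one rule validation. The discount rule and its thresholds are hypothetical, but the pattern of feeding parameterized data into a single test is the same idea the accelerator automates.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

/**
 * Generic sketch of data-driven business-rule validation. The rule and the
 * expected values are hypothetical; the real toolkit executes equivalent
 * checks inside PRPC's Automated Unit Testing and TMF.
 */
class DiscountRuleValidationTest {

    // Hypothetical business rule under test: order total -> discount percent.
    static int discountFor(double orderTotal) {
        if (orderTotal >= 10_000) return 15;
        if (orderTotal >= 1_000)  return 10;
        if (orderTotal >= 100)    return 5;
        return 0;
    }

    @ParameterizedTest(name = "total={0} expects {1}% discount")
    @CsvSource({
            "50,      0",
            "100,     5",
            "999.99,  5",
            "1000,    10",
            "10000,   15"
    })
    void discountRuleMatchesBusinessDefinition(double orderTotal, int expectedPercent) {
        assertEquals(expectedPercent, discountFor(orderTotal));
    }
}

Adding coverage then means adding rows of data rather than writing new test methods, which is the kind of leverage that makes large rule suites fast to execute and easy to extend.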

With the use of the Test Management Framework (TMF) and other test repository tools, the use case steps and parameters described within the automated test scripts can be satisfied using the Scenarios and Suites features. The Scenarios and Suites test the behavior of the application and verify compliance with the original requirements. Besides providing significant savings in cost, time and effort, automation lets you run many more tests during your testing process as a suite, providing hands-off BPM testing results.

Boundary Testing
QA 2100 provides boundary or negative testing of the business rules in the BPM framework and process to confirm the effectiveness of rule sets by requesting conditions that don't exist. This helps ensure the business rules engine returns the correct value or an appropriate error. These boundary tests are set up as part of the actual application within each workflow.

QA 2100 has experience with automated tools to accelerate testing and improve accuracy

Employing automation tools to test and validate business rules adds breadth and depth to your testing efforts. By using pre-defined testing parameters, hands-off automation methodologies, and innovative solutions, you can accelerate and simplify a complex process.
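For illustration only, here is a tiny negative test in the same JUnit 5 style: it requests a condition that does not exist in a hypothetical rule set and confirms the lookup fails with a clear error rather than a silent default. The rule set and exception type are invented for the sketch and are not the toolkit's actual API.

import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.Map;

import org.junit.jupiter.api.Test;

/**
 * Sketch of boundary / negative testing: ask the rules lookup for a product
 * line that has no rule and verify it returns an appropriate error instead
 * of a wrong or default value. Names and types here are illustrative.
 */
class BoundaryRuleTest {

    // Hypothetical rule set: known product lines and their approval rules.
    private static final Map<String, String> APPROVAL_RULES =
            Map.of("AUTO", "TwoLevelApproval", "HOME", "SingleApproval");

    static String approvalRuleFor(String productLine) {
        String rule = APPROVAL_RULES.get(productLine);
        if (rule == null) {
            throw new IllegalArgumentException("No approval rule for product line: " + productLine);
        }
        return rule;
    }

    @Test
    void unknownProductLineReturnsAppropriateError() {
        IllegalArgumentException error = assertThrows(
                IllegalArgumentException.class,
                () -> approvalRuleFor("BOAT")); // a condition that does not exist
        assertTrue(error.getMessage().contains("BOAT"));
    }
}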

[Image: XML service test tool execution timing]

32.5 hours to perform 65 rules tests manually
1.15 minutes to perform 65 rules tests using automation (roughly a 1,700× speedup)


8 Key Factors for Cloud Delivery: Eight CIO Recommendations

To thrive in today's swift-changing and unforgiving marketplace, companies need accessible, agile and adaptable IT. Flexible service delivery is the answer. Here's how to employ it. In the post-PC era, IT decision makers have a choice to make: Stay with the platform that got them here? Adopt a private or public cloud? Perhaps IT as-a-service or a mix of all the above?

[Image: CIO stats, mapping the enablers]

Whatever you decide, a move away from rigid IT infrastructures is a move in the right direction. According to a recent McKinsey & Company survey, more than half of surveyed officers cited the switch to flexible service delivery as a top priority.

The reason is simple: Flexible delivery is more adaptable and costs less. It’s a smarter way to distribute IT to users.

We recently confirmed this with a large U.S.-based telecommunications client that needed its technology to scale to tens of millions of subscribers on demand and then let those same subscribers pick and choose online the services most important to them.

Originally, the company considered a traditional infrastructure and then briefly a public cloud. But after completing a holistic analysis, we determined an internal private cloud would be the best option for three reasons: 1) it met the client's needs (i.e., time-to-market, minimal downtime, continuity and rapid scalability requirements); 2) it was less expensive over time when compared with the public cloud; and 3) it left open the option of later incorporating a public cloud if desired.

The beta test verified that the model enabled rapid elasticity, continuity and increased uptime. Mission accomplished.

You can achieve the same results. But only after you consider the following eight recommendations we’ve used to transition clients to flexible service delivery:

1.  Determine your biggest pain point.
No one’s going to say “no” to faster time to market, enhanced computing flexibility, improved performance, tighter security and better service at a lower cost. While all of those areas can certainly be addressed through flexible service delivery, the model you choose will largely depend on your top pain point. In other words, you’ll need to honestly answer the following: What keeps you up at night?

2.  Decide which functions to shift.
Next, you’ll need to designate which functions and processes to switch to flexible service delivery. This entails a “core vs. context” analysis, in which you distinguish business activities that provide you with a competitive advantage from those that should be offloaded to external providers. Remember, what was previously considered a core activity is now often viewed as a contextual one, including (but not limited to) network management, tech support, performance measurement and financial planning.

3.  Start with a low-risk pilot program.
For established companies, it’s usually best to dip your toes into flexible delivery before diving in headfirst. You can achieve this by developing a pilot program for non-production processes, back-office functions or anything else that has lower performance requirements and less impact on end-users, such as testing and development. In some cases, however, it might make sense to start with customer-facing applications if there’s a pressing need to market new products.

4.  Identify “chatty” applications.
To get the most for your money, you’ll need to determine the consumption levels of your CPU, memory, network and disk storage. In doing so, you’ll be able to identify “chatty” applications that sometimes incur surprising surcharges between the cloud provider and the business. The more you know, the more you save.

5.  Mind those non-IT bottlenecks.
Once IT has been upgraded to the equivalent of a 12-lane highway with the help of flexible delivery, other parts of the organization cannot remain in horse-and-buggy mode. Well, they can. But they’ll become a bottleneck to your business. The challenge is to optimize the entire enterprise so that other areas don’t hold up the software lifecycle. To do this, you’ll need to educate and update your company culture.

6.  Compare service requirements.
Buyer beware: Most public cloud providers offer standard service level agreements that cannot be customized according to client needs. In some cases, general service levels may be sufficient for a development environment. But they often fall short of the demands of a production environment. To find a service level best suited to you, you’ll need to know the difference.

7.  Check security qualifications.
Security is a top-of-mind consideration, particularly for applications and systems that handle personal information. To ensure your data is protected, always certify a provider’s security qualifications. And know that cloud providers typically conduct security audits at a more intensive level than companies hosting internal private clouds.

8.  Demand transparency with a daily dashboard.
To manage variable costs, you’ll want to monitor your capacity and all of its operational parameters — straight down to the lowest server — with a daily dashboard. With this level of transparency, you can make day-to-day decisions about the level of CPU, memory and storage required and use metrics and trends to make decisions about future capacity financial modeling.

Admittedly, the changes required to move to a flexible service delivery can seem overwhelming. But by following the above advice, you’ll put yourself in a better position to find your own success.

For more information, read our white paper on Flexible Service Delivery (pdf), get inspired by our enabler series on The Future of Work or visit Cognizant Business Consulting.

Uncover the key factors to consider before designing a flexible service delivery model. Read on to find out more:

http://www.cognizant.com/latest-thinking/perspectives/Pages/finding-flexible-delivery-success-eight-cio-recommendations.aspx#.VE1ilqK-f8R.twitter
