The 12-Week Challenge: A PeopleSoft Modernization Story
Chapter 2: Assessment Day
The emergency team meeting on Friday afternoon had gone about as well as Maya expected, which is to say, poorly.
She’d opened with the truth: the CIO wanted to migrate to SaaS, the decision was effectively made, and she’d bought them twelve weeks to prove there was a better path. Twelve weeks to modernize or start updating resumes.
Jake Morrison, her senior DBA with seventeen years at Riverside, had gone pale. “They want to replace Oracle with what? Workday’s solution? That’s not even a real database.”
Tom Patterson, her application server admin, had laughed. “Twelve weeks? Maya, we can’t even complete a PeopleTools upgrade in twelve weeks. You want us to rebuild our entire operational model?”
Sarah Chen, the junior developer who’d joined the team eight months ago, had been the only one who looked intrigued rather than terrified. “What exactly do we need to prove?”
“That we can operate PeopleSoft like a modern cloud-native platform,” Maya had said. “Infrastructure-as-code. Automated deployments. Real observability. The works.”
“We don’t know how to do any of that,” Tom had said flatly.
“Then we learn,” Maya had replied. “Starting Monday.”
The meeting had ended with grudging acceptance, or at least, no one had quit on the spot. Maya considered that a win.
Now it was Monday morning, 7:00 AM, and Maya stood in Conference Room B with a fresh pot of coffee and a whiteboard that read: “Week 1: Assessment - Know Thyself.”
Her team filtered in: Jake with his Oracle certification mug, Tom with a notebook he’d been using since 2009, Sarah with her laptop covered in Python stickers that suddenly seemed prescient, and the rest of the team, including Lisa (Portal admin), Marcus (Integration Broker specialist), and Priya (Application Designer developer).
“Good morning,” Maya said. “Welcome to Week 1 of the rest of your careers.”
“Too early for inspirational speeches,” Jake muttered into his coffee.
“Fair enough. Here’s the non-inspirational version: we’re going to spend today conducting the most honest assessment of our capabilities that this team has ever done. No ego. No defensiveness. No ‘but we’ve always done it this way.’ We’re going to document every manual process, every piece of tribal knowledge, every technical debt bomb sitting in our infrastructure. And then we’re going to prioritize what to fix first.”
“This sounds terrible,” Tom said.
“It is,” Maya agreed. “But you know what’s more terrible? Discovering in Week 11 that we missed something critical because we weren’t honest with ourselves in Week 1.”
She divided the whiteboard into five columns: Infrastructure, Deployment, Operations, Skills, and Quick Wins.
“Let’s start with infrastructure,” Maya said. “Tom, walk us through how we currently provision a new PeopleSoft environment.”
Tom flipped open his ancient notebook. “Okay, so first I submit a ticket to the infrastructure team for VMs.”
“Stop,” Maya interrupted. “How long does that take?”
“Depends. If they’re busy, maybe a week?”
“Keep going.”
“Once we get the VMs, I install the OS manually. Then I need Jake to install Oracle.”
“Which takes me two days,” Jake interjected. “Database install, configuration, applying patches, creating tablespaces, setting up backup jobs.”
“All manual?” Maya asked.
“Well, I have a script for some of it,” Jake said defensively. “But yeah, mostly manual. Each environment has its own quirks.”
“What quirks?”
Jake shifted uncomfortably. “You know. Different storage layouts, different memory configurations, different networking setups. The dev database is configured differently from QA, which is different from production.”
“Why?”
“Because… they were built at different times by different people?”
Maya wrote on the whiteboard under Infrastructure: “Environment provisioning: 2-3 weeks, completely manual, no standardization, configuration drift.”
“That seems harsh,” Tom said.
“Is it inaccurate?”
Tom said nothing.
“Okay,” Maya continued. “After Jake installs Oracle, then what?”
“Then I install the application server, process scheduler, web server, and PIA,” Tom said. “That’s another three days if nothing goes wrong.”
“How often does something go wrong?”
“Every single time,” Tom admitted. “Wrong library path, missing dependencies, PeopleTools patches that need to be applied in a specific order that isn’t documented anywhere except in my head.”
Maya added: “App tier installation: 3+ days, failure-prone, zero documentation, tribal knowledge-dependent.”
“When you put it like that, we sound incompetent,” Lisa said quietly.
“You’re not incompetent,” Maya said. “You’re operating in a model that was designed twenty years ago. But that model doesn’t scale, doesn’t automate, and doesn’t survive if Tom gets hit by a bus.”
“Thanks for that image,” Tom said.
“Let’s talk about deployments,” Maya continued, moving to the second column. “Priya, how do we currently move customizations from development to production?”
Priya, who’d been silent until now, spoke carefully. “I make the changes in Application Designer in dev. Test them. Then I create a project. Export the project to a file. Email the file to myself. Copy it to the QA server. Import it. Test again. Then repeat for production.”
“How long does that take?”
“For a small change? Maybe two hours. For a larger project with dependencies? I’ve had deployments take an entire day.”
“What happens if something breaks in production?”
“I… manually roll it back?”
“Do you have automated tests?”
Priya looked at Maya like she’d suggested summoning demons. “Automated tests? For PeopleCode?”
Maya wrote: “Deployment process: 100% manual, no CI/CD, no automated testing, high risk, slow rollback.”
“Operations,” Maya said, moving to the third column. “Marcus, when was the last time we had a production incident?”
Marcus, who managed their Integration Broker and application messaging, checked his phone. “Thursday. IB went down for forty minutes.”
“How did you know it was down?”
“A user called and said their integration wasn’t working.”
“So our monitoring is… end users?”
“We have Tuxedo monitoring,” Marcus said defensively. “It just doesn’t alert properly. And the logs are spread across seventeen different servers. So when something breaks, I SSH into each server, grep through logs, and try to correlate timestamps.”
“How long does incident resolution typically take?”
“If I can find the problem quickly? An hour. But if it’s a weird issue? I’ve spent eight hours troubleshooting before.”
Maya added: “Observability: User-reported incidents, distributed logs with no aggregation, no tracing, slow mean-time-to-resolution.”
The whiteboard was starting to look like an indictment.
“Backups,” Maya said. “Jake, how do we back up the database?”
“RMAN scripts that run nightly,” Jake said. “They’ve been running for… six years? Seven?”
“Have you ever tested a restore?”
Silence.
“Jake.”
“I tested it when I first set it up,” Jake said. “In 2018.”
“So we’re trusting backup scripts that haven’t been validated in seven years to protect the entire university’s enterprise data?”
“When you say it like that...”
“How long would it take to restore from backup if we lost production right now?”
Jake did some mental math. “Database restore from RMAN? Maybe six hours. Then we’d need to rebuild the application tier, reconfigure everything… twelve hours? Eighteen if we hit problems?”
Maya wrote: “Disaster Recovery: 12-18 hour RTO, untested backup procedures, manual recovery process.”
The room had gone very quiet.
“Skills assessment,” Maya said, moving to the fourth column. This was going to hurt. “Show of hands: who has used Git for version control?”
Sarah’s hand went up. No one else’s.
“Infrastructure as code? Terraform, Chef, anything?”
No hands.
“CI/CD pipelines? Jenkins, GitLab CI, GitHub Actions?”
Sarah raised her hand tentatively. “I used GitHub Actions in a personal project once.”
“Container orchestration? Kubernetes, Docker?”
No hands.
“Cloud platforms? AWS, Azure, GCP?”
No hands.
“Anyone here written Python?”
Sarah’s hand. That was it.
Maya wrote: “Skills: Traditional admin skillset, no DevOps experience, no cloud experience, no automation frameworks, no modern tooling.”
She capped the marker and turned to face her team. Six faces stared back at her with varying expressions of defensive discomfort.
“Okay,” Maya said. “Here’s what I see. We have a team of smart, dedicated people running a critical enterprise platform using operational practices from 2005. We have no automation, no standardization, no observability, no disaster recovery confidence, and no modern technical skills. If we tried to document our processes, we’d find that half of them exist only in Tom’s and Jake’s heads. If we lost a key team member, we’d be in crisis. And we’re spending most of our time on manual toil instead of improvement.”
“That’s a pretty bleak assessment,” Jake said.
“It’s an honest assessment,” Maya replied. “And here’s the thing: this isn’t unique to us. I’d bet 70% of PeopleSoft shops operate exactly like this. It works, sort of, until it doesn’t. But it’s expensive, risky, and slow. And it’s why consultants can walk in here and say ‘your PeopleSoft operation costs $4.2 million a year’ and make it stick.”
“So what do we do?” Lisa asked.
Maya moved to the fifth column: Quick Wins.
“We prove we can change,” Maya said. “Today, we pick one thing, one small thing, that we can automate or improve in the next week. Something that will give us a victory, build confidence, and demonstrate that this team can evolve.”
“Like what?” Tom asked.
“You tell me,” Maya said. “What’s the most annoying manual process you deal with every day? The thing that makes you think ‘there has to be a better way to do this’?”
Sarah spoke up first. “Applying the quarterly critical patch DPKs. Every time Oracle releases a patch bundle, we spend three days downloading files, reading the readme, manually applying the patch, checking the logs, and fixing errors.”
“And we have to do it for every environment and every server,” Priya added. “So a single critical patch bundle becomes two weeks of work across dev, QA, and production.”
“Can we automate that?” Maya asked.
“I don’t know,” Sarah said. “I’ve never tried.”
“Then that’s your Week 1 project,” Maya said. “You and Priya. Write a script, in Python, Bash, or whatever works best, that automates the critical patch download and application process. Document it. Test it in dev. If it works, we’ve just saved ourselves two weeks every quarter.”
Sarah and Priya exchanged glances. Sarah nodded. “Okay. We can try.”
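Sarah was already pulling up an editor. The rough shape of the thing took only a few minutes to sketch: nothing more than a starting point, with a placeholder staging directory and a stand-in apply command where the real DPK invocation would eventually go.

```python
#!/usr/bin/env python3
"""Rough sketch of a quarterly patch helper. Purely illustrative: the staging
path and the apply command are placeholders, not the real DPK invocation."""
import subprocess
import sys
import zipfile
from pathlib import Path

STAGING = Path("/u01/patch_staging")  # placeholder staging area
APPLY_CMD = ["./psft-dpk-setup.sh", "--env_type", "midtier"]  # stand-in apply step

def stage_bundle(bundle_zip: str) -> Path:
    """Unzip the downloaded patch bundle into its own staging directory."""
    target = STAGING / Path(bundle_zip).stem
    target.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(bundle_zip) as zf:
        zf.extractall(target)
    return target

def apply_patch(patch_dir: Path) -> int:
    """Run the apply step and capture its output in a log file for review."""
    with (patch_dir / "apply.log").open("w") as log:
        result = subprocess.run(APPLY_CMD, cwd=patch_dir,
                                stdout=log, stderr=subprocess.STDOUT)
    return result.returncode

def check_log(patch_dir: Path) -> list[str]:
    """Return any lines that look like errors, so nobody has to eyeball the whole log."""
    with (patch_dir / "apply.log").open() as log:
        return [line.rstrip() for line in log if "ERROR" in line.upper()]

if __name__ == "__main__":
    patch_dir = stage_bundle(sys.argv[1])
    rc = apply_patch(patch_dir)
    errors = check_log(patch_dir)
    if rc != 0 or errors:
        print(f"Patch apply failed (rc={rc}); {len(errors)} error lines in {patch_dir}/apply.log")
        sys.exit(1)
    print(f"Patch applied cleanly from {patch_dir}")
```

It wouldn’t survive contact with a real patch bundle, but that wasn’t the point. The point was that the steps Sarah had rattled off (download, stage, apply, check the logs) could be described to a machine at all.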
“Jake,” Maya said. “You mentioned your RMAN backups haven’t been tested in seven years. This week, your job is to perform a complete restore test in a non-production environment. Document the actual restore time, identify any problems, and update the runbooks. Make disaster recovery a known quantity instead of a hope.”
Jake grimaced. “That’s going to be tedious.”
“Yes,” Maya agreed. “But when we’re presenting to the CIO in Week 12, and he asks about business continuity, I want to tell him we have a tested, documented DR process with a validated RTO. Can you do that?”
“Yeah,” Jake said reluctantly. “I can do that.”
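The restore test itself would eat most of a day, but the harness around it was simple to describe: run a prepared RMAN command file against a scratch instance, time it, and write the number down where the next person could find it. Something along these lines, with the command file, target connection, and paths left as placeholders:

```python
#!/usr/bin/env python3
"""Timed restore test: run a prepared RMAN command file against a scratch
instance and record how long it actually took. The command file, target
connection, and paths are placeholders for the real DR test setup."""
import subprocess
import time
from datetime import datetime
from pathlib import Path

RMAN_CMDFILE = "restore_test.rman"  # placeholder: prepared RESTORE/RECOVER script
RMAN_LOG = Path("/u01/dr_test/restore_test.log")
RUNBOOK = Path("/u01/dr_test/runbook.md")

def run_restore_test() -> float:
    """Invoke RMAN with the prepared command file and return elapsed seconds."""
    start = time.monotonic()
    subprocess.run(
        ["rman", "target", "/", f"cmdfile={RMAN_CMDFILE}", f"log={RMAN_LOG}"],
        check=True,
    )
    return time.monotonic() - start

def record_result(elapsed: float) -> None:
    """Append the measured restore time to the runbook, so the RTO is a fact, not a guess."""
    with RUNBOOK.open("a") as rb:
        rb.write(f"- {datetime.now():%Y-%m-%d}: database restore completed in "
                 f"{elapsed / 3600:.1f} h\n")

if __name__ == "__main__":
    elapsed = run_restore_test()
    record_result(elapsed)
    print(f"Restore finished in {elapsed / 3600:.1f} hours; details in {RMAN_LOG}")
```

The script wasn’t the hard part. The hard part was that nobody had run anything like it since 2018.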
“Marcus,” Maya said. “Observability. I want you to spend this week researching log aggregation solutions. We need to get all our PeopleSoft logs, including application server, web server, process scheduler, and integration broker, flowing into a single place where we can search them. Start with open-source options. Elasticsearch, OpenSearch, whatever. See which versions are compatible with our PeopleTools release, then document what it would take to implement.”
“That sounds complicated,” Marcus said.
“It is,” Maya said. “Which is why you’re starting with research, not implementation. By Friday, I want a one-page proposal that says ‘here’s how we could aggregate our logs, here’s what it would cost, here’s what value we’d get.’ Can you do that?”
Marcus considered. “Yeah. I can do that.”
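The eventual proof of concept would be almost embarrassingly small: a script that tails one application server log and pushes its lines to an OpenSearch- or Elasticsearch-compatible _bulk endpoint, just to show the logs could land in one searchable place. A sketch of that idea, with the endpoint URL, index name, and log path all stand-ins, might look like this:

```python
#!/usr/bin/env python3
"""Proof-of-concept log shipper: read new lines from one PeopleSoft log file
and send them to an OpenSearch/Elasticsearch-compatible _bulk endpoint.
The endpoint URL, index name, and log path are stand-ins."""
import json
import time
import urllib.request
from pathlib import Path

BULK_URL = "http://logsearch.example.edu:9200/_bulk"  # stand-in endpoint
INDEX = "psoft-appsrv-dev"                            # stand-in index name
LOG_FILE = Path("/u01/app/psft/appserv/APPDOM/LOGS/APPSRV_0101.LOG")  # stand-in path

def ship(lines: list[str]) -> None:
    """Send a batch of log lines using the NDJSON bulk format."""
    body = ""
    for line in lines:
        body += json.dumps({"index": {"_index": INDEX}}) + "\n"
        body += json.dumps({"message": line, "source": str(LOG_FILE)}) + "\n"
    req = urllib.request.Request(
        BULK_URL,
        data=body.encode(),
        headers={"Content-Type": "application/x-ndjson"},
    )
    urllib.request.urlopen(req)

def follow(path: Path):
    """Yield new lines appended to the log file, tail -f style."""
    with path.open() as f:
        f.seek(0, 2)  # start at the current end of the file
        while True:
            line = f.readline()
            if line:
                yield line.rstrip("\n")
            else:
                time.sleep(1)

if __name__ == "__main__":
    batch = []
    for entry in follow(LOG_FILE):
        batch.append(entry)
        if len(batch) >= 50:
            ship(batch)
            batch = []
```

A real deployment would use a proper log shipper instead of a hand-rolled tail loop, but as an exhibit stapled to a one-page proposal, it would make the idea concrete.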
“Tom,” Maya said. “You’re going to start documenting. Pick one environment, let’s say dev, and document every single configuration setting, every directory path, every environment variable, every tuning parameter. Everything that lives in your head and your notebook needs to live in a wiki or a Git repo by the end of the week.”
“All of it?” Tom looked pained.
“The dev environment,” Maya clarified. “Think of it as the template for the infrastructure-as-code we’re going to build later. But we can’t automate what we can’t describe.”
“This is going to be incredibly boring,” Tom said.
“Welcome to Week 1,” Maya replied.
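There was one way to make the boring part pay forward, though: a throwaway script that captured the dev domain’s environment variables and configuration settings into a single structured file would give Tom a skeleton to annotate instead of a blank page. The variable names and config path below are guesses standing in for Riverside’s actual layout.

```python
#!/usr/bin/env python3
"""Starting point for environment documentation: capture key environment
variables and domain config settings for the dev environment into one JSON
file that can live in a Git repo. The variable names and config path are
assumptions about the local layout, not a universal PeopleSoft standard."""
import json
import os
from pathlib import Path

ENV_VARS = ["PS_HOME", "PS_CFG_HOME", "PS_APP_HOME", "ORACLE_HOME", "TUXDIR"]
APPSRV_CFG = Path("/u01/app/psft/cfg/appserv/APPDOM/psappsrv.cfg")  # assumed path
OUTPUT = Path("dev_environment.json")

def capture_env() -> dict:
    """Record the key environment variables as currently set on this host."""
    return {name: os.environ.get(name, "NOT SET") for name in ENV_VARS}

def capture_cfg(cfg_path: Path) -> dict:
    """Pull simple key=value settings out of the domain config file, skipping comments."""
    settings = {}
    if cfg_path.exists():
        for line in cfg_path.read_text(errors="replace").splitlines():
            line = line.strip()
            if line and not line.startswith((";", "#", "[")) and "=" in line:
                key, _, value = line.partition("=")
                settings[key.strip()] = value.strip()
    return settings

if __name__ == "__main__":
    snapshot = {
        "environment_variables": capture_env(),
        "appserver_config": capture_cfg(APPSRV_CFG),
    }
    OUTPUT.write_text(json.dumps(snapshot, indent=2, sort_keys=True))
    print(f"Wrote {OUTPUT} with {len(snapshot['appserver_config'])} config settings")
```

Once the settings lived in a file under version control, they stopped being Tom’s memory and started being raw material for the infrastructure-as-code Maya kept talking about.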
She turned to Lisa. “Portal admin is cleaner than most of this, but I want you working with Sarah and Priya on the patching automation. Your job is to test it, break it, and make sure it actually works. Be the QA.”
“I can do that,” Lisa said.
Maya stepped back from the whiteboard. “Here’s what success looks like at the end of Week 1: Sarah and Priya have a working patch automation script. Jake has a tested DR runbook with real numbers. Marcus has a log aggregation proposal. Tom has documentation for one complete environment. Lisa has validated that the patch script actually works.”
“That’s… a lot for one week,” Tom said.
“It’s a fraction of what we need to accomplish in twelve weeks,” Maya said. “But it’s a start. And more importantly, it’s how we prove to ourselves that we can do this. Right now, you’re all thinking this is impossible. By Friday, you’ll have evidence that it’s possible.”
“And if we fail?” Jake asked.
“Then we learn why we failed and try again,” Maya said. “But here’s the thing: we’re not trying to be perfect. We’re trying to be better. Any improvement is a win.”
She grabbed the eraser and cleared a space on the whiteboard. “Let me show you where we’re trying to get to.”


