Section 14

Supplier Questionnaire

geniant's responses to VolunteerNow's vendor qualification questionnaire, answered in full.

The following are geniant's responses to all questions in VolunteerNow's Supplier Questionnaire, organized by the four evaluation sections. Cross-references to the main proposal are provided throughout.

1. Qualifications and Experience

Q1.1.

Describe your firm's profile, including total employee count, years in business, and the specific office location(s) supporting this account. Detail your proposed engagement model (remote/hybrid/in-person) and how your team will collaborate effectively with VolunteerNow stakeholders.

geniant is a global experience consulting firm founded in 2022, headquartered in Dallas, TX — approximately 11 miles from VolunteerNow's office. We employ 70+ specialists across strategy, experience design, and technology delivery, with studios in Dallas, San Francisco, and London. For VOLY Next Gen, the primary delivery team will operate out of our Dallas studio.

Engagement Model: We propose a hybrid model designed around VolunteerNow's working style, not ours. During Phase 1 (Discovery & Design), we will conduct on-site workshops at VolunteerNow's offices — stakeholder interviews, journey mapping sessions, and design reviews benefit enormously from in-person collaboration. During build phases, the team operates remotely with a structured weekly cadence: daily standups, weekly working demos, bi-weekly steering committee reviews, and continuous access to a shared staging environment.

Our Dallas proximity is a genuine operational advantage. We can be on-site within the hour for critical decision points, UAT sessions, or go-live support — without the scheduling friction of cross-timezone or cross-country coordination.

See Section 2 for full company profile and Section 7 for team structure.

Q1.2.

Describe your experience within the last five years delivering custom-built technology platforms for K-12, nonprofits, or public-sector entities. Include the scope, objectives, and outcomes for at least one project that integrated complex operational workflows with background screening or safety requirements.

The most direct answer to this question in our portfolio is backgroundchecks.com, where geniant served as the full-stack technology partner for over five years.

backgroundchecks.com (5+ year full-stack partnership): geniant owned strategy, experience design, engineering, and platform operations for a consumer-facing background screening platform. The scope included: FCRA-compliant screening order workflows, identity verification integrations, adverse action processing, multi-tier user account management, high-availability infrastructure for time-sensitive background check requests, and sensitive PII data governance at volume. We maintained platform reliability across peak demand periods and continuously evolved the product. This is not an adjacent credential — it is the exact domain VOLY Next Gen requires.

The direct translation to VOLY Next Gen: our team has already designed and built FCRA-compliant screening workflows, integrated with background check data providers, handled adverse action and consent management, and maintained the security infrastructure required for sensitive screening data. The veriFYI and JDP integrations in this proposal are grounded in firsthand operational experience, not research.

Additional relevant experience: Charles Schwab (16-year partnership, complex multi-role digital workflows and regulated financial data), Federal Reserve (compliance-intensive multi-stakeholder platform, SOC 2-aligned security), AJ Gallagher (AI-augmented platform delivery, 70–85% efficiency gains in 17 weeks), and Delta Air Lines (consumer-facing platform serving millions of users across diverse device and accessibility requirements).

See Section 12 for full case studies.

Q1.3.

Identify key personnel assigned to this project (roles and qualifications). Specify the ratio of design staff to software engineers and clarify whether the work is performed by W-2 employees or subcontractors.

The VOLY Next Gen core team will be staffed from full-time W-2 geniant employees. Our three named key personnel are:

  • Luke Keith — Engagement Lead & Strategy: 12+ years of platform strategy, product design, and engineering leadership. Directs overall vision, architecture decisions, and ensures VOLY Next Gen's UX and technical foundation align with VolunteerNow's long-term mission.
  • Melanie Gotz — Project Manager: Has led 15+ multi-phase platform modernizations at geniant. Primary point of contact for day-to-day execution, steering committee chair, and delivery accountability.
  • Chris Byler — Lead Architect: geniant's Lead Architect specializing in cloud-native architecture, data security, and AI-augmented workflows. Responsible for VOLY Next Gen's system design, veriFYI integration architecture, and FERPA/COPPA compliance infrastructure.

The broader delivery team includes: Lead Experience Designer, UI Designer, Lead Front-End Engineer, Senior Front-End Engineer, Senior Back-End Engineers (2), QA/Test Lead, and DevOps/Cloud Engineer — totaling 10–11 team members at peak staffing.

Design-to-Engineering Ratio: 2 design staff to 4–5 engineers during the core build phase (approximately 1:2). This ratio reflects industry best practice for high-stakes platform work — strong design direction throughout build phases prevents costly rework.

Employment Structure: 95% W-2 geniant employees. We may engage 1–2 vetted contractor specialists (e.g., K-12 compliance subject matter experts or background check integration specialists) operating under geniant's standard NDAs and security protocols.

See Section 7 for full team profiles and allocation details.

2. Technical Approach, Strategic Thinking and Roadmap Development

Q2.1.

Using a past example of a multi-phase platform rebuild, explain your end-to-end delivery methodology (e.g., Agile, Waterfall). How do you sequence discovery, design, build, and launch while defining and measuring success at each milestone?

Our delivery methodology is Spec-Driven Development (SDD) — an agile methodology enhanced by AI-augmented workflows that geniant has refined across multiple enterprise platform modernizations. In our AJ Gallagher engagement, SDD compressed a projected 6-month delivery into 17 weeks while producing documentation of higher quality than the client had ever received.

Phase Sequencing: We run five phases, each with defined entry/exit criteria: (1) Discovery & Design — stakeholder interviews, journey maps, architecture specification, UX wireframes, and a prioritized feature backlog as exit criteria; (2) Foundation Build — working cloud infrastructure, data models, authentication, and a functional application shell; (3) Core Platform Development — incremental working software delivered every 2 weeks, with VolunteerNow staff accessing the staging environment continuously; (4) Integrations & Advanced Features — third-party connectors, analytics, and AI foundations; (5) Testing, Migration & Launch — full QA, data migration, staged rollout.

Measuring Success: Each sprint produces a working demo. Each phase concludes with a formal sign-off against a pre-agreed definition of done. SDD specifications serve as the audit trail — every feature's acceptance criteria are documented before a line of code is written.

See Sections 6 and 8 for methodology and timeline details.

Q2.2.

Identify the top three risks you encounter in platform rebuilds (technical, operational, or adoption-related). How do you align executive leadership and end-users to ensure timely decision-making and risk mitigation?

Risk 1 — Scope Creep During Build: The most common derailment in platform rebuilds is undisciplined scope growth once stakeholders see working software. Our mitigation: SDD specifications lock scope at the sprint level. Change requests are welcomed but formally triaged — estimated, prioritized, and scheduled — never silently absorbed.

Risk 2 — Data Migration Surprises: Legacy data quality issues discovered late in a project are catastrophic to timelines. We conduct a full data audit in Phase 1, not Phase 5. Every anomaly (duplicates, orphaned records, malformed fields) is documented and addressed before migration scripts are written.

Risk 3 — Stakeholder Decision Latency: Platform rebuilds stall when decisions escalate without resolution. Our governance structure prevents this: a designated Project Sponsor with decision-making authority, a bi-weekly steering committee with a pre-published agenda, and a documented decision log so no open item persists across more than one sprint.

End-User Alignment: We involve agency representatives and volunteers in usability testing during Phase 1 and Phase 3 — not as afterthoughts, but as scheduled inputs to the design process. Adoption risk drops significantly when users see their feedback reflected in the product before launch.

Q2.3.

Describe your technical approach to designing a modular architecture (veriFYI), preserving it as an independent platform while enabling deep integration with a single, integrated solution (VOLY).

veriFYI is architected as an independent SaaS platform with its own data model, API surface, authentication layer, and deployment lifecycle — completely decoupled from VOLY at the infrastructure level. VOLY integrates with veriFYI exclusively through a versioned, documented API contract. This means veriFYI can be updated, scaled, or extended without requiring VOLY changes, and vice versa.

Integration Pattern: VOLY communicates with veriFYI via RESTful API calls for background check initiation, status polling, and result retrieval. veriFYI handles all vendor routing (JDP and future providers), result normalization, FCRA compliance logic, adverse action workflows, and audit trail. VOLY stores only the outcome and timestamp — not the raw screening data — keeping PII exposure minimized on the VOLY side.
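The data-minimization boundary described above can be sketched in a few lines. This is an illustrative sketch only, not geniant's implementation: the payload shape, field names, and `ScreeningRecord` type are all hypothetical. The point it demonstrates is that VOLY persists only the outcome and timestamp from a screening response, never the raw report.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ScreeningRecord:
    """What VOLY persists: outcome and timestamp only, never raw screening data."""
    volunteer_id: str
    outcome: str        # e.g. "clear", "review", "adverse"
    completed_at: str   # ISO 8601 timestamp

def record_screening_result(volunteer_id: str, verifyi_payload: dict) -> ScreeningRecord:
    """Reduce a (hypothetical) veriFYI API response to the minimal fields VOLY stores.
    Raw report contents and any PII stay on the veriFYI side."""
    return ScreeningRecord(
        volunteer_id=volunteer_id,
        outcome=verifyi_payload["outcome"],
        completed_at=verifyi_payload.get("completed_at")
                     or datetime.now(timezone.utc).isoformat(),
    )

# A hypothetical response: VOLY keeps two fields and discards everything else.
payload = {"outcome": "clear", "completed_at": "2027-03-01T15:04:05Z",
           "report": {"ssn_last4": "1234"}}  # raw data, never persisted by VOLY
rec = record_screening_result("vol-42", payload)
```

Because the boundary is a typed record rather than a pass-through of the vendor payload, adding a new screening provider behind veriFYI cannot accidentally widen what VOLY stores.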

Why independence matters: If VolunteerNow wishes to offer veriFYI as a standalone product to other organizations — or integrate it with platforms beyond VOLY — the architecture supports this without re-engineering. The integration boundary is clean, documented, and owned by VolunteerNow.

See Section 4 for the full veriFYI architecture and integration diagram.

Q2.4.

How do you handle multi-tenant architecture and standardization vs. client-level configuration?

VOLY Next Gen is built on a multi-tenant architecture with a hierarchical configuration model: Platform (VolunteerNow global) → District → Campus/Organization → Agency. Each tier can inherit configuration from the tier above or override it within defined bounds.

Standardization vs. Configuration: Core platform behavior (data model, security controls, compliance workflows) is standardized and non-overridable at the tenant level — this protects data integrity and compliance guarantees. Configurable elements (branding, notification templates, custom fields, reporting views, approval workflows) are surfaced to appropriate administrative roles. Districts can configure their own settings; campuses can configure within the bounds the district allows; agencies operate within campus bounds.
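The inherit-or-override rule above can be made concrete with a small resolver. This is a sketch under assumptions: the key names, tier dictionaries, and `CORE_KEYS` set are hypothetical, but the behavior matches the model described, where core keys always resolve to the platform value and other keys take the lowest-tier override.

```python
# Hypothetical set of standardized, non-overridable keys.
CORE_KEYS = {"background_check_required", "data_retention_days"}

def resolve_config(key: str, tiers: list[dict]) -> object:
    """tiers is ordered platform-first (Platform → District → Campus → Agency).
    Lower tiers override higher ones, unless the key is core."""
    if key in CORE_KEYS:
        return tiers[0][key]            # platform value always wins for core keys
    value = None
    for tier in tiers:                  # walk downward; the last override wins
        if key in tier:
            value = tier[key]
    return value

platform = {"background_check_required": True, "brand_color": "#004488"}
district = {"brand_color": "#AA2200", "background_check_required": False}  # core override ignored
campus   = {"notification_template": "campus-welcome"}

core = resolve_config("background_check_required", [platform, district, campus])
# core is True: the district's attempted override of a core key has no effect
color = resolve_config("brand_color", [platform, district, campus])
# color == "#AA2200": non-core keys may be overridden at lower tiers
```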

This model gives VolunteerNow the ability to enforce platform-wide standards while giving districts and agencies the autonomy they need to manage their own programs — without requiring VolunteerNow staff to manage every configuration change manually.

Q2.5.

Describe the database schema strategy that supports both a shared 'District' view and strictly isolated 'Campus' views. How do you prevent 'data seepage' between autonomous campus entities?

We use a hybrid tenancy model: a single shared database with row-level tenant isolation enforced at the application and query layer, combined with logical schema separation for sensitive campus-level data.

District Roll-Up View: Aggregate reporting tables at the district level are populated via controlled ETL processes — campuses push anonymized, consent-gated data upward. District administrators see roll-up metrics (volunteer hours, participation rates, program outcomes) without accessing individual campus volunteer records.

Campus Isolation: Campus-level volunteer data (student PII, individual hour logs, background check outcomes) is stored with campus_id as a mandatory partition key on every table. Application-layer middleware enforces that every query includes a campus scope — queries that attempt cross-campus access are rejected at the ORM level, not just by convention. This is enforced in automated tests.
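The application-layer guard described above can be sketched as follows. Names are hypothetical and the real enforcement lives in ORM middleware plus database row-level security, but the rule is the same: every query is stamped with the caller's campus scope, and any attempt to filter on a different campus is rejected rather than silently ignored.

```python
class CampusScopeError(Exception):
    """Raised when a query attempts to reach outside the caller's campus."""

def enforce_campus_scope(session_campus_id: str, query_filters: dict) -> dict:
    """Return query_filters with the mandatory campus scope applied."""
    scoped = dict(query_filters)
    requested = scoped.get("campus_id")
    if requested is not None and requested != session_campus_id:
        raise CampusScopeError("cross-campus access rejected")
    scoped["campus_id"] = session_campus_id   # scope is mandatory, not optional
    return scoped

ok = enforce_campus_scope("campus-7", {"status": "active"})
# ok == {"status": "active", "campus_id": "campus-7"}
```

Rejecting the mismatched filter outright, rather than rewriting it, is deliberate: a cross-campus request is treated as a bug or an attack to be surfaced, not a value to be corrected.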

Data Seepage Prevention: Row-level security policies at the database level provide a second enforcement layer independent of application code. Regular penetration testing and data isolation audits are part of our standard release process.

Q2.6.

Explain your strategy for delivering a mobile-optimized, web-based platform utilizing human-centered design. How do you ensure the interface remains accessible for users in resource-constrained environments (e.g., older devices or low-bandwidth areas)?

Mobile-first is not a feature — it is the default design posture for VOLY Next Gen. Every screen is designed at mobile dimensions first and progressively enhanced for tablet and desktop. The volunteer-facing experience is conceived as a mobile web app with native-like interaction patterns; administrative experiences receive full desktop optimization.

Human-Centered Design Process: We conduct user research with real volunteers and agency staff during Phase 1 — not assumptions from a brief. Journey maps, usability testing, and iterative prototype reviews with actual users drive the design. Our Concept Prototypes (Section 15) represent the initial output of this thinking.

Resource-Constrained Environments: We apply progressive enhancement principles — core functionality works on older devices and slower connections; richer interactions load conditionally. Specific techniques include: lazy image loading, code splitting to minimize initial payload, service worker caching for repeat visits, and graceful degradation when JavaScript is limited. We target a Lighthouse performance score of 90+ on mobile.

See Section 5 for full experience design approach.

Q2.7.

Address your approach to WCAG 2.2 compliance and multi-language support (specifically Spanish).

VOLY Next Gen will meet WCAG 2.2 Level AA — exceeding the RFP's stated 2.1 requirement. This is a commitment, not an aspiration: accessibility testing is integrated into our sprint definition of done, not treated as a final-phase audit.

WCAG 2.2 AA Implementation: We use automated accessibility scanning (axe-core) in CI/CD pipelines to catch regressions on every merge. Manual screen reader testing (NVDA, VoiceOver) is conducted on each major flow. We maintain a VPAT (Voluntary Product Accessibility Template) and will update it at each major release.

Spanish Language Support: We implement i18n (internationalization) from the ground up using a standard framework (react-i18next or equivalent). All user-facing strings are externalized to translation files from day one — not retrofitted. Spanish is the initial second language, and the architecture supports adding additional languages without code changes. We recommend engaging a professional translation service for the initial Spanish content and building a community-contribution model for ongoing maintenance.
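The externalize-everything approach can be illustrated independent of any framework. This sketch is not react-i18next; the catalog and keys are hypothetical. It shows the two properties that matter: user-facing strings live in translation files rather than code, and a missing translation falls back to English instead of breaking the page.

```python
# Hypothetical message catalog; in production these live in translation files.
MESSAGES = {
    "en": {"welcome": "Welcome, {name}!", "hours_logged": "Hours logged"},
    "es": {"welcome": "¡Bienvenido, {name}!"},  # missing keys fall back to English
}

def t(key: str, lang: str = "en", **kwargs) -> str:
    """Look up a string by key and language, falling back to English."""
    template = MESSAGES.get(lang, {}).get(key) or MESSAGES["en"][key]
    return template.format(**kwargs)

greeting = t("welcome", lang="es", name="Ana")
# greeting == "¡Bienvenido, Ana!"
fallback = t("hours_logged", lang="es")
# fallback == "Hours logged"  (untranslated key falls back rather than erroring)
```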

See Section 5 for full accessibility and design approach.

Q2.8.

How do you enable 'approved research use cases' while protecting sensitive data? Describe your methodology for data masking or anonymization when providing data exports for third-party evaluation.

Research access to VOLY data is governed by a formal data request workflow built into the platform's administrative layer. Approved research use cases flow through: request submission → institutional review → data steward approval → scoped export generation. No researcher receives raw production data.

Anonymization Methodology: Exports for research use apply k-anonymity principles: direct identifiers (names, email addresses, student IDs) are removed; quasi-identifiers (zip codes, age ranges) are generalized to prevent re-identification; sensitive fields (background check outcomes, medical accommodations) are suppressed entirely. Exports are generated as one-time files logged to the audit trail with the requesting researcher's identity, intended use, and approval chain.
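The three export rules above (remove, generalize, suppress) can be sketched directly. Field names and generalization rules here are hypothetical examples, not the production schema: zip codes truncate to a 3-digit prefix and ages collapse to decade ranges, as one plausible instance of quasi-identifier generalization.

```python
DIRECT_IDENTIFIERS = {"name", "email", "student_id"}
SENSITIVE_FIELDS = {"background_check_outcome", "medical_accommodations"}

def anonymize_record(record: dict) -> dict:
    """Apply export rules: drop direct identifiers, suppress sensitive fields,
    generalize quasi-identifiers; pass everything else through."""
    out = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS or field in SENSITIVE_FIELDS:
            continue                          # removed or suppressed entirely
        if field == "zip_code":
            out[field] = value[:3] + "XX"     # generalize to 3-digit prefix
        elif field == "age":
            decade = (value // 10) * 10
            out["age_range"] = f"{decade}-{decade + 9}"
        else:
            out[field] = value
    return out

row = {"name": "A. Volunteer", "email": "a@example.org", "zip_code": "75201",
       "age": 34, "hours": 12, "background_check_outcome": "clear"}
export = anonymize_record(row)
# export == {"zip_code": "752XX", "age_range": "30-39", "hours": 12}
```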

Student Data: Student-level data exports require additional consent gating under FERPA — parental consent (for minors) or student consent (18+) must be on file before any individual-level student record is included in a research export, even in anonymized form.

Q2.9.

Detail your data architecture strategy. How will you enable district-level roll-up reporting and campus-level autonomy while ensuring privacy, consent, and data governance specifically for student-level data (e.g., FERPA, COPPA)?

Student data is treated as the highest-sensitivity data class in the VOLY Next Gen architecture. FERPA and COPPA protections are enforced at the data model level, not the UI level — meaning restrictions apply regardless of which interface accesses the data.

FERPA Compliance: Student education records are stored with strict access controls: only school officials with a legitimate educational interest can access individual student records. API endpoints serving student data require both authentication and purpose-scoped authorization tokens. All student data access is logged to an immutable audit trail.
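The dual requirement above, authenticated role plus purpose scope with every attempt logged, can be sketched in miniature. Roles, purposes, and the in-memory log are hypothetical stand-ins for the real authorization service and immutable audit store.

```python
# Hypothetical authorization table: (role, purpose) pairs permitted to read
# individual student records.
ALLOWED = {("school_official", "educational_interest")}
audit_log: list[tuple[str, str, str, bool]] = []

def access_student_record(user_role: str, purpose: str, student_id: str) -> bool:
    """Grant access only when role AND purpose are authorized; log every attempt,
    granted or not."""
    granted = (user_role, purpose) in ALLOWED
    audit_log.append((user_role, purpose, student_id, granted))  # append-only in production
    return granted

granted = access_student_record("school_official", "educational_interest", "s-1")
# granted is True, and the attempt is recorded in audit_log either way
```

Logging denied attempts alongside granted ones is the point: the audit trail must show who tried to reach student data, not only who succeeded.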

COPPA Protections: For volunteers under 13, parental consent is a mandatory prerequisite to account creation. Consent records are stored with timestamps and IP addresses. The platform will not collect behavioral data on COPPA-protected users beyond what is necessary for volunteer activity tracking.

District Roll-Up vs. Campus Autonomy: District reporting aggregates data across campuses using consent-gated, anonymized metrics. Campus administrators see their own students' data in full; district administrators see aggregate counts and outcomes. No cross-campus individual record access is possible at any administrative tier below VolunteerNow staff level.

Q2.10.

Describe your approach to data encryption (at rest/in transit), regular vulnerability scanning, and third-party penetration testing. How do you manage PII and ensure SOC2/HIPAA level data residency requirements?

Encryption: All data at rest is encrypted using AES-256. All data in transit uses TLS 1.2+ with HSTS enforced. Database backups are encrypted with keys managed in a dedicated KMS (AWS KMS or Azure Key Vault). Encryption keys are rotated on a defined schedule.

Vulnerability Scanning: Automated SAST (static application security testing) runs on every pull request. Dependency scanning (Dependabot or equivalent) monitors for CVEs in third-party libraries with auto-remediation for non-breaking updates. Infrastructure vulnerability scanning runs on a weekly schedule with findings triaged by severity.

Penetration Testing: We conduct third-party penetration testing at the conclusion of Phase 5 (pre-launch) and annually thereafter. Findings are remediated before go-live; critical findings block launch.

Data Residency: All data is stored in US-based cloud regions (AWS us-east-1 primary, us-west-2 secondary for DR). No data is replicated to non-US regions. SOC 2 Type II alignment is maintained through access control, logging, and change management practices.

See Section 10 for comprehensive security details.

Q2.11.

Describe your policy on the use of LLMs or AI-assisted coding tools during development. How do you ensure that proprietary VolunteerNow logic or sensitive data is not ingested into public models?

geniant uses AI-assisted development tools as a core part of our SDD methodology. Our policy is explicit, enforced, and documented:

Approved Tools: AI coding assistants (GitHub Copilot Enterprise, Claude for work) configured with data-off / zero-retention modes — meaning no prompts or completions are used to train public models. We use enterprise-tier agreements with all AI tool vendors specifically to guarantee this.

What is never sent to AI tools: Production data, PII, VolunteerNow's proprietary business logic, database credentials, API keys, or any content tagged as confidential in the project data classification scheme. Engineers are trained on these restrictions as part of project onboarding, and tooling configurations enforce them.

Audit Trail: AI tool usage is logged at the team level. geniant's engineering lead reviews AI-generated code for quality, security, and IP compliance before it is committed. No AI-generated code is committed without human review.

See Section 4 for the full AI/LLM policy.

Q2.12.

Based on your understanding of VolunteerNow's mission, provide a conceptual 3-year roadmap. Include your approach to introducing AI capabilities (e.g., matching volunteers to opportunities) and handling 'Technical Debt' as the platform scales.

Year 1 (Launch & Stabilize): VOLY Next Gen goes live July 1, 2027 with the core platform: volunteer registration and profiles, agency portal, opportunity discovery, check-in/out, hour tracking, background check integration (veriFYI/JDP), student volunteerism MLP, Congressional Award tracking, and district/campus analytics. The architecture is AI-ready — clean data models, event-driven logging — but AI features are intentionally deferred until the data foundation matures.

Year 2 (Expand & Automate): Introduce efficiency AI: AI-assisted opportunity matching (recommendations based on volunteer history and preferences), predictive engagement scoring (identifying volunteers at risk of churn), and automated outreach personalization. Expand to new markets using the multi-tenant configuration model. Launch Spanish-language experience to full production quality.

Year 3 (Intelligence & Scale): Full AI-enabled matching and engagement engine. District-level impact analytics with predictive modeling. Corporate partner self-service portal. API program enabling third-party ecosystem integrations.

Technical Debt Management: SDD's living specification model prevents the primary source of debt — undocumented decisions. We dedicate 15–20% of each sprint to refactoring and infrastructure hardening. Debt is tracked in the same backlog as features, with a defined ratio preventing it from being indefinitely deferred.

Q2.13.

Provide your phased roadmap for introducing AI capabilities within the platform. Specifically, how do you differentiate between 'efficiency AI' (internal dev tools) and 'functional AI' (user-facing features like opportunity matching or engagement prediction)?

We differentiate AI by who benefits and what data it touches:

Efficiency AI (development-phase): AI tools that accelerate geniant's engineering process — code generation, test writing, documentation drafting. These tools operate on code and specifications, never on VolunteerNow's production data or user PII. They are active from day one of the engagement and governed by geniant's AI tool policy (see 2.11).

Functional AI Phase 1 — Recommendation (Year 2): Volunteer-opportunity matching based on past participation history, stated interests, and location. This requires 12+ months of clean production data to train against. Matching models are trained on anonymized behavioral data; no PII is used in model training. Outputs are presented as suggestions, not mandates.

Functional AI Phase 2 — Prediction (Year 2–3): Engagement prediction models identifying volunteers likely to lapse, agencies at risk of program failure, and optimal communication timing. These models are built on aggregate platform signals and audited for demographic bias before deployment.

Functional AI Phase 3 — Personalization (Year 3): Fully personalized volunteer journeys — customized opportunity feeds, proactive nudges, impact summaries tailored to individual volunteer history. VolunteerNow retains full control over AI feature rollout and can disable any functional AI component independently.

Q2.14.

How do you audit AI algorithms for bias, particularly regarding demographic reporting and volunteer eligibility?

Every functional AI model deployed in VOLY Next Gen goes through a bias audit before production deployment and on a defined review schedule thereafter.

Bias Audit Process: We evaluate model outputs across demographic dimensions available in the data (age, zip code, language preference) to identify disparate impact patterns. If a matching algorithm systematically underserves volunteers from specific zip codes or over-recommends certain opportunity types to specific demographic groups, that is a model failure — not a feature.
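One common way to operationalize the disparate-impact check above is the "four-fifths" benchmark: flag any group whose recommendation rate falls below 80% of the best-served group's rate. The sketch below is illustrative, with hypothetical group labels and rates; the production audit would run over real recommendation logs across every available demographic dimension.

```python
def disparate_impact(recommend_rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Return groups whose rate falls below threshold × the best-served group's rate."""
    best = max(recommend_rates.values())
    return sorted(g for g, r in recommend_rates.items() if r < threshold * best)

# Hypothetical recommendation rates by zip-code prefix.
rates = {"zip_752": 0.42, "zip_750": 0.40, "zip_754": 0.21}
flagged = disparate_impact(rates)
# flagged == ["zip_754"]   (0.21 < 0.8 × 0.42 = 0.336)
```

A flagged group is a trigger for investigation and model revision, not an automatic conclusion of bias; the audit distinguishes genuine disparate impact from differences in underlying opportunity supply.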

Volunteer Eligibility: Eligibility decisions (background check outcomes, age requirements, training prerequisites) are never delegated to AI models. These are deterministic, rule-based workflows with documented logic and full audit trails. AI is used for suggestions and personalization; eligibility is always a human-readable rule.

Transparency: VolunteerNow staff have access to model performance dashboards showing recommendation distribution across demographic segments. Audits are documented and available for review. We recommend an annual third-party AI audit as the platform's AI capabilities mature.

Q2.15.

Provide examples of integrations with third-party systems (SIS, Identity systems, screening providers). Disclose any proprietary third-party libraries that would require ongoing licensing fees not managed directly by VolunteerNow.

Background Screening: JDP and veriFYI via REST API integration. veriFYI is the primary abstraction layer; JDP is a veriFYI-managed provider. Future screening vendors can be added through veriFYI without VOLY code changes.

Identity & Authentication: OAuth 2.0 / OIDC-compliant SSO supporting Google, Microsoft (for district/school staff), and standard email/password. SAML integration available for districts with enterprise identity providers.

Student Information Systems (SIS): Integration with common K-12 SIS platforms (PowerSchool, Infinite Campus) via Clever or direct API where available. Used for rostering, enrollment verification, and academic credit reporting.

Communications: Twilio (SMS), SendGrid (email). Both are industry-standard, VolunteerNow-managed accounts with no geniant licensing dependency.

Third-Party Licensing Disclosure: All open-source libraries will use permissive licenses (MIT, Apache 2.0, BSD) — no GPL. A Software Bill of Materials (SBOM) will be delivered at project handoff. Any commercial library requiring ongoing licensing will be disclosed during Phase 1 architecture review, with VolunteerNow's approval required before inclusion. Our current architecture plan contains no proprietary library dependencies that would create vendor lock-in for VolunteerNow.

See Section 4 for the full technology stack and dependency table.

Q2.16.

Confirm that VolunteerNow will have direct access to the source code repository (e.g., GitHub/Azure DevOps) throughout the lifecycle. Describe the hosting environment (AWS/Azure/GCP) and VolunteerNow's level of access to that environment.

Confirmed. VolunteerNow will be the primary owner of the source code repository from contract execution — not granted access at project end. geniant engineers have write access during development, which is formally revoked at project handoff. VolunteerNow can hire internal engineers or engage other vendors at any point without geniant's approval or licensing restrictions.

Hosting Environment: AWS (recommended) or Azure, depending on VolunteerNow's existing enterprise agreements. Primary region: us-east-1 (Virginia) for data residency and latency optimization. DR region: us-west-2 (Oregon).

VolunteerNow's Access Level: All infrastructure is deployed to a cloud account owned by VolunteerNow — not geniant. geniant has delegated administrative access during development (revoked post-launch). VolunteerNow has full owner-level access to all environments throughout the engagement. We will never deploy infrastructure that VolunteerNow cannot independently inspect, modify, or hand off.

See Section 4 for the full source code and cloud environment policy.

Q2.17.

In a multi-phase rebuild, how do you balance 'speed to market' for each phase with the prevention of long-term technical debt? What percentage of each sprint is typically dedicated to refactoring or infrastructure hardening?

Speed and quality are not a trade-off in SDD — they are aligned by design. Because specifications are written before code, engineers are not guessing at requirements, backtracking on misunderstandings, or producing throwaway prototypes. This eliminates the primary source of technical debt in traditional Agile projects.

Sprint Allocation: We target 15–20% of each sprint for refactoring, infrastructure hardening, and debt remediation. This is tracked in the same backlog as features — it is not discretionary. When debt accumulates beyond a defined threshold, we schedule a dedicated hardening sprint before proceeding to new features.

Architecture Review Gates: Each phase transition includes an architecture review where the Lead Architect evaluates accumulated technical decisions against the long-term roadmap. Decisions that create future constraints are flagged, documented, and either remediated or accepted with explicit VolunteerNow sign-off.

Speed Mechanisms: SDD's AI-augmented workflows compress implementation time 40–60% compared to traditional approaches — so the time recaptured by AI assistance is reinvested in quality and testing, not just earlier delivery dates.

Q2.18.

Describe your approach to developing actionable, phased roadmaps that clearly sequence discovery, build, testing, launch, and ongoing support.

Every geniant engagement begins with a roadmap artifact produced during Phase 1: a phased delivery plan with explicit entry/exit criteria, milestone definitions, dependency mapping, and a backward-scheduled timeline from the fixed go-live date (July 1, 2027 for VOLY Next Gen).

The roadmap is a living document — updated at each phase gate and shared with VolunteerNow's steering committee. It is not a Gantt chart produced once and ignored. When scope changes, the roadmap is updated before work proceeds, so VolunteerNow always has a current view of what is committed, what is in progress, and what is at risk.

For VOLY Next Gen specifically, our five-phase roadmap is detailed in Section 8 with exact deliverables per phase, milestone table, and contingency planning for the July 1, 2027 deadline.

See Section 8 for the full phased roadmap.

3. Ongoing Support, Maintenance & Operational Readiness

Q3.1.

How do you structure delivery, decision-making, and ongoing support for organizations (like VolunteerNow) that do not maintain internal technology teams?

Our hybrid engagement model is explicitly designed for organizations without internal IT teams — this is not an accommodation, it is our standard operating model for nonprofit and mid-market clients.

During Delivery: geniant handles all infrastructure, deployment, and technical operations. VolunteerNow provides domain expertise and business decisions. The only technical resource we require from VolunteerNow is a part-time Technical Liaison (10–15 hrs/week) for data access and third-party coordination — not a developer.

Decision-Making Structure: A designated Project Sponsor (executive authority), a weekly working session with the Project Manager, and a bi-weekly Steering Committee with a pre-published agenda. Decisions are logged and resolved within one sprint — no open items drift indefinitely.

Post-Launch Support: geniant offers tiered support retainers that include infrastructure monitoring, incident response, security updates, and enhancement sprints — all managed by geniant without requiring VolunteerNow to hire technical staff. Full details are in the Cost Proposal.

See Section 11 for full support model details.

Q3.2.

Describe your approach to security updates, compliance monitoring, accessibility remediation, and performance optimization.

Security Updates: Critical CVEs are patched within 24 hours of disclosure; high-severity CVEs within 72 hours. Routine dependency updates are batched into weekly maintenance windows with automated regression testing before deployment.

Compliance Monitoring: FERPA/COPPA compliance controls are monitored through automated audit log review and quarterly manual audits. SOC 2 alignment is maintained through continuous access control review, change management logging, and annual third-party assessment.

Accessibility Remediation: Automated accessibility scanning runs on every deployment. WCAG regressions are treated as bugs — not cosmetic issues — and remediated within the next sprint. Annual manual accessibility audits with an external auditor are included in our recommended support retainer.

Performance Optimization: Application performance monitoring (APM) runs continuously in production. We set and enforce performance budgets (page load targets, API response time SLAs) and alert on degradation before users notice it.
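As a minimal illustration of how a performance budget gates a release, the check can be sketched as follows. The metric names and threshold values here are illustrative examples, not the contractual targets set during Phase 1:

```python
# Illustrative performance-budget gate. Thresholds below are examples only;
# actual budgets are set with VolunteerNow during Phase 1.
PERFORMANCE_BUDGETS = {
    "page_load_p95_ms": 2000,   # 95th-percentile page load target
    "api_response_p95_ms": 300, # 95th-percentile API response target
}

def check_budgets(measured: dict) -> list:
    """Return the list of budgets the measured metrics exceed."""
    return [
        metric
        for metric, limit in PERFORMANCE_BUDGETS.items()
        if measured.get(metric, 0) > limit
    ]

violations = check_budgets({"page_load_p95_ms": 2450, "api_response_p95_ms": 180})
# → ["page_load_p95_ms"]; a non-empty list fails the pipeline and alerts on-call.
```

In production the same comparison runs continuously against APM data, so degradation triggers an alert rather than waiting for a release.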

Q3.3.

Define your specific RTO and RPO for a total cloud region failure. Describe your backup frequency, verification process, and documented 'Rollback' procedures in the event of a release failure.

RTO (Recovery Time Objective): 4 hours for a total primary region failure, with full application restoration in the secondary region (us-west-2).

RPO (Recovery Point Objective): 1 hour — database backups and transaction log shipping occur hourly. Maximum data loss in a catastrophic failure is 60 minutes of transactions.

Backup Process: Automated daily full snapshots with continuous incremental backups. Backups are stored in a separate AWS account with cross-region replication. Restore tests are performed monthly — we verify backups can actually be restored, not just that the backup job completed.

Release Rollback: Every production deployment uses Blue/Green infrastructure — the previous version remains live until the new version is verified healthy. If automated health checks fail post-deployment, traffic is routed back to the previous version within minutes. Manual rollback can be triggered by any engineer on the on-call rotation. Rollback procedures are documented, tested in staging, and reviewed annually.
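The Blue/Green health gate can be sketched in miniature as below. The routing callables and retry count are illustrative assumptions; in practice the traffic switch happens at the load balancer:

```python
# Sketch of the post-deployment health gate for Blue/Green releases.
# `check_green` polls the new version's health endpoint; the routing
# callables stand in for the load balancer's traffic switch.
def verify_or_rollback(check_green, route_to_green, route_to_blue, attempts=3):
    """Shift traffic to the new (green) version only if repeated health
    checks pass; otherwise route back to the still-live blue version."""
    if all(check_green() for _ in range(attempts)):
        route_to_green()
        return "green"
    route_to_blue()  # the previous version never went away, so this is instant
    return "blue"
```

Because the blue environment stays warm until green is verified, rollback is a routing change measured in minutes, not a redeployment.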

See Section 10 for full disaster recovery details.

Q3.4.

How do you ensure reliability during peak usage (e.g., start of school year)? Detail your use of automated testing (unit/integration), typical code coverage percentages, and the use of 'Blue/Green' deployments or 'Feature Flags.'

Peak Load Preparation: We identify anticipated peak periods (start of school year, major volunteer drives, corporate partner events) during Phase 1 and build load testing scenarios against those profiles. Load tests are run against staging before each major release and before known peak periods. Auto-scaling policies are configured and verified — not assumed.

Automated Testing: Our CI/CD pipeline enforces:

  • Unit test coverage of 80%+ on business logic
  • Integration tests covering all API endpoints and critical user flows
  • End-to-end tests for the five highest-traffic workflows (volunteer registration, opportunity sign-up, check-in/out, hour submission, background check initiation)

Tests run on every pull request; builds that reduce coverage below the threshold are blocked.
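The coverage gate itself is simple. The sketch below uses the 80% floor stated above; comparing against the branch baseline (so coverage can never regress) is an assumed implementation detail:

```python
# Sketch of the pull-request coverage gate. The 80% floor matches the
# stated target; the baseline comparison is an illustrative ratchet.
COVERAGE_FLOOR = 80.0

def coverage_gate(current_pct: float, baseline_pct: float) -> bool:
    """Pass only if coverage meets the floor and has not regressed."""
    return current_pct >= COVERAGE_FLOOR and current_pct >= baseline_pct
```

A failing gate blocks the merge, so coverage can only stay level or improve over the life of the codebase.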

Blue/Green Deployments: All production releases use Blue/Green — zero-downtime deployments with instant rollback capability (see 3.3).

Feature Flags: New features are deployed behind feature flags, allowing controlled rollout to subsets of users (staff testing, pilot agencies, percentage-based rollout) before full release. This decouples deployment from release, meaning code can be deployed to production safely before it is visible to users.
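Percentage-based rollout typically works by deterministic bucketing, sketched below. The flag name and hashing scheme are illustrative; production would use a managed flag service rather than hand-rolled code:

```python
# Illustrative percentage-based feature rollout via deterministic bucketing.
import hashlib

def flag_enabled(flag: str, user_id: str, rollout_pct: int) -> bool:
    """Bucket each (flag, user) pair into [0, 100); enable the feature
    for users whose bucket falls under the rollout percentage."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_pct
```

Because the same user always lands in the same bucket, a 10% rollout is stable across requests; raising the percentage to 100 releases the feature to everyone without a new deployment.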

Q3.5.

Describe your support model for an organization without a dedicated internal IT team. Include staffing levels, escalation paths, and defined service levels (SLA) for response times.

Post-launch, geniant offers tiered support retainers designed for organizations without internal technical staff. Our standard SLA tiers:

  • Critical (platform down / data integrity risk): 1-hour response, 4-hour resolution target, 24/7 on-call coverage
  • High (major feature impaired, significant user impact): 4-hour response, next business day resolution target
  • Medium (minor feature impaired, workaround available): 1 business day response, 5-day resolution target
  • Low (cosmetic, enhancement, question): 2 business day response, scheduled in next sprint

Escalation Path: All support requests go to a dedicated support inbox monitored by the geniant operations team. Critical issues auto-escalate to the on-call engineer within 15 minutes of acknowledgment. VolunteerNow has a named escalation contact (Melanie Gotz) for any issue that needs executive attention.

Staffing: Post-launch support retainers are staffed by the same team that built the platform — not handed to a separate support organization. Institutional knowledge is retained.

See Section 11 for full support tier details. Pricing in the Cost Proposal.

Q3.6.

How do you handle 'minor enhancements' (e.g., form field changes) and system tuning? Describe your approach to documentation, knowledge transfer and training to ensure long-term sustainability after the initial engagement ends.

Minor Enhancements: Configuration changes (form fields, notification templates, lookup values) that can be made through the admin interface are documented and trained so VolunteerNow staff can make them independently. Code-level changes (new fields, workflow modifications) are handled in enhancement sprints under a support retainer.

Documentation: SDD's living specification model means documentation is a first-class output of development, not an afterthought. At project handoff, VolunteerNow receives:

  • Complete SDD specification repository (every feature's acceptance criteria and design rationale)
  • Architecture documentation (system diagrams, data model, integration contracts)
  • Operational runbooks (deployment procedures, incident response playbooks, backup verification steps)
  • A Software Bill of Materials

Knowledge Transfer: Formal knowledge transfer is built into Phase 5 — not a single handoff meeting. We conduct role-specific training sessions (administrators, agency staff, VolunteerNow operations) and record training videos for asynchronous reference.

Long-Term Sustainability: Our explicit goal is for VolunteerNow to be able to operate VOLY Next Gen independently after the engagement. We build to hand off — not to create dependency.

See Section 11 for the full knowledge transfer and training plan.

4. Alignment with VolunteerNow Mission & Values

Q4.1.

VolunteerNow views this platform as civic infrastructure. How does your firm proactively propose improvements over a 3-to-5-year horizon? Provide an example of how you helped a long-term partner pivot as community needs changed based on community-centered design feedback.

We share VolunteerNow's view entirely: VOLY is not a SaaS subscription — it is civic infrastructure. The standard by which we should be held is not “did the features ship?” but “did volunteer engagement in this community improve?” That requires a partnership posture, not a vendor posture.

Proactive Improvement Model: We propose a formal annual platform review — separate from operational support — where geniant presents VolunteerNow's leadership with a forward-looking assessment: emerging technology capabilities relevant to volunteer management, usage data patterns suggesting new feature opportunities, and community feedback themes from agency and volunteer users. This is built into our long-term engagement model as a standing deliverable, not something that happens only when VolunteerNow asks.

Long-Term Partnership Example: Our 16-year partnership with Charles Schwab is the clearest demonstration of this posture. Over that engagement, we helped Schwab navigate multiple major pivots — mobile-first transition, post-merger integration, accessibility overhaul, and AI capability introduction — each driven by a combination of market signals and user feedback rather than technology novelty. The relationship endured because we consistently prioritized Schwab's users' needs over technology trends.

See Sections 12 and 13 for partnership philosophy and case studies.

Q4.2.

Describe your philosophy on 'Responsible Data Stewardship.' How do you ensure data collection preserves public trust and supports community equity beyond basic technical security?

Responsible data stewardship means collecting only what serves the user, being transparent about what is collected and why, and actively protecting communities who cannot protect themselves — including minors and under-resourced volunteers who may not have the digital literacy to evaluate data risk on their own behalf.

Data Minimization: We collect what is required for platform function and volunteer safety. We do not build behavioral profiles beyond what is necessary for matching and engagement. We do not sell or share data with third parties outside of explicit VolunteerNow-approved integrations.

Community Equity: Our accessibility and Spanish-language commitments are the most direct expression of equity in the platform. A platform that only works well for English-speaking users with modern devices is not serving VolunteerNow's full community. We design for the margins first — if it works for the most constrained user, it works for everyone.

Transparency: Privacy notices are written in plain language, not legal boilerplate. Users understand what data is collected and can request deletion. For student volunteers, FERPA and COPPA rights are surfaced proactively — not buried in terms of service.

See Section 13 for geniant's full responsible data stewardship philosophy.

Q4.3.

What specific resources or engagement levels will you require from VolunteerNow to successfully deliver this effort? Describe your collaboration model, including on-site discovery and workshops.

We are explicit about what we need from VolunteerNow because ambiguity here derails projects. Our requirements:

  • Project Sponsor (Executive): Decision-making authority on scope, budget, and prioritization. Available bi-weekly for Steering Committee; reachable within 24 hours for escalations.
  • Domain Expert (Full-time equivalent): Staff member with deep knowledge of current VOLY workflows, agency relationships, and volunteer patterns. Available for Phase 1 interviews, ongoing backlog prioritization, and UAT sign-off.
  • Technical Liaison (10–15 hrs/week): Point of contact for infrastructure access, data questions, and third-party vendor coordination. Does not need to be a developer.
  • Agency/Volunteer Representatives (Phase 1): 3–5 agency coordinators and 3–5 volunteers available for 2-hour usability testing sessions during Discovery. Scheduling coordinated by the Domain Expert.
  • Data Access: Read access to the current VOLY database for audit and migration planning. No write access required from VolunteerNow's side during this phase.

Total estimated VolunteerNow resource commitment: approximately 1.5 FTE across the engagement duration, concentrated in Phase 1 and UAT.

On-Site Discovery: Phase 1 workshops — stakeholder interviews, journey mapping, and design co-creation sessions — will be conducted on-site in Dallas. We prefer two consecutive on-site days per workshop block rather than frequent single-day visits. Remote collaboration tools (Figma, Notion, Slack, video) handle all between-session work.
