
Research & Analysis

πŸ”¬ Structured thinking for academic research, due diligence, and investigative work.

Research benefits from ReasonKit’s multi-tool approach, which helps ensure comprehensive, unbiased analysis.

Common Research Questions

Literature Review

rk think "What are the key debates in [your field]? What's established vs. contested?" --deep

Methodology Check

rk think "I'm planning to use [methodology] for my study on [topic]. What are the potential weaknesses?" --balanced

Claim Verification

rk think "Paper claims [specific claim]. How robust is this conclusion?" --paranoid

Research Direction

rk think "I want to study [topic]. What angles are underexplored? What might be impactful?" --deep

Example Analysis

Question: β€œI’m reviewing a paper that claims meditation reduces cortisol by 25%. Should I cite it?”

rk think "Paper claims meditation reduces cortisol by 25% (n=45, randomized). Should I cite this finding?" --paranoid
╔════════════════════════════════════════════════════════════╗
β•‘  RESEARCH ANALYSIS: Citation Decision                      β•‘
β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  πŸ’‘ GIGATHINK: Research Perspectives                       β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  1. METHODOLOGY: Is RCT the right design for this?        β”‚
β”‚  2. SAMPLE SIZE: n=45 is small for physiological claims   β”‚
β”‚  3. EFFECT SIZE: 25% is large - extraordinary claims...   β”‚
β”‚  4. REPLICATION: Has this been replicated?                β”‚
β”‚  5. PUBLICATION BIAS: What about null results?            β”‚
β”‚  6. MEASUREMENT: How was cortisol measured? When?         β”‚
β”‚  7. BLINDING: Can you blind meditation studies?           β”‚
β”‚  8. CONTROL GROUP: What did control do? Placebo effect?   β”‚
β”‚  9. AUTHOR CONFLICTS: Financial interests in meditation?  β”‚
β”‚  10. META-ANALYSES: What do aggregated studies show?      β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  ⚑ LASERLOGIC: Methodological Critique                    β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  ISSUE 1: Small sample size                               β”‚
β”‚  β†’ n=45 has low statistical power                         β”‚
β”‚  β†’ High risk of false positive (especially if pβ‰ˆ0.05)     β”‚
β”‚  β†’ Effect likely inflated (winner's curse)                β”‚
β”‚                                                            β”‚
β”‚  ISSUE 2: Large effect size suspicious                    β”‚
β”‚  β†’ 25% cortisol reduction is unusually large              β”‚
β”‚  β†’ Larger effect sizes often shrink on replication        β”‚
β”‚  β†’ Compare to meta-analytic estimates                     β”‚
β”‚                                                            β”‚
β”‚  ISSUE 3: Blinding challenges                             β”‚
β”‚  β†’ Participants know if they're meditating                β”‚
β”‚  β†’ Expectation effects can be substantial                 β”‚
β”‚  β†’ Hard to have true placebo in meditation studies        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
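The first two issues can be made concrete with a quick power calculation. The sketch below uses only the Python standard library and a normal approximation to a two-sided, two-sample test; the per-arm split (~22), the effect sizes, and the 10% prior are illustrative assumptions, not figures taken from any real paper:

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_sample_power(d: float, n_per_group: int, alpha_z: float = 1.96) -> float:
    """Approximate power of a two-sided two-sample test for
    standardized effect size d (normal approximation)."""
    ncp = d * math.sqrt(n_per_group / 2.0)  # noncentrality for equal arms
    return 1.0 - normal_cdf(alpha_z - ncp) + normal_cdf(-alpha_z - ncp)

def ppv(power: float, alpha: float, prior: float) -> float:
    """Probability that a 'significant' result is true, given power,
    alpha, and the prior probability that the hypothesis is true."""
    return (power * prior) / (power * prior + alpha * (1.0 - prior))

# n = 45 total -> roughly 22 per arm (illustrative split)
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: power ~ {two_sample_power(d, 22):.2f}")

# Even at a medium effect (d = 0.5), power is well under the
# conventional 0.8 -- and low power inflates false-positive risk:
print(f"PPV at d = 0.5, 10% prior: {ppv(two_sample_power(0.5, 22), 0.05, 0.10):.2f}")
```

Under these illustrative assumptions, fewer than half of β€œsignificant” findings would reflect a real effect, which is the statistical basis for the low-power and winner’s-curse warnings above.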

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  πŸ›‘οΈ PROOFGUARD: Source Verification                       β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  TO VERIFY:                                                β”‚
β”‚  β€’ Journal impact factor and reputation                   β”‚
β”‚  β€’ Author h-index and prior work                          β”‚
β”‚  β€’ Funding sources                                        β”‚
β”‚  β€’ Replication attempts                                   β”‚
β”‚  β€’ Meta-analysis inclusion                                β”‚
β”‚                                                            β”‚
β”‚  TRIANGULATION NEEDED:                                     β”‚
β”‚  β€’ Find 2+ independent replications                       β”‚
β”‚  β€’ Check meta-analyses for pooled effect size             β”‚
β”‚  β€’ Look for pre-registered studies                        β”‚
β”‚                                                            β”‚
β”‚  VERIFICATION QUESTIONS:                                   β”‚
β”‚  β€’ Is the data publicly available?                        β”‚
β”‚  β€’ Was the study pre-registered?                          β”‚
β”‚  β€’ What do systematic reviews conclude?                   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  πŸ”₯ BRUTALHONESTY: Academic Reality                        β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  UNCOMFORTABLE TRUTH:                                      β”‚
β”‚  You want to cite this because it supports your thesis.   β”‚
β”‚  Confirmation bias makes weak evidence look strong.       β”‚
β”‚                                                            β”‚
β”‚  UNCOMFORTABLE TRUTH:                                      β”‚
β”‚  Single studies, especially with small n and large        β”‚
β”‚  effects, rarely replicate. The replication crisis        β”‚
β”‚  exists precisely because of papers like this.            β”‚
β”‚                                                            β”‚
β”‚  HONEST QUESTION:                                          β”‚
β”‚  If this study showed 0% effect, would you still cite it? β”‚
β”‚  If no, you're cherry-picking.                            β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

SYNTHESIS:
Don't cite as standalone evidence. Instead:
1. Find and cite meta-analyses (stronger evidence)
2. If citing, note limitations (small n, needs replication)
3. Use language like "some evidence suggests" not "studies show"
4. Look for pre-registered replications

Research-Specific Profile

[profiles.research]
tools = ["gigathink", "laserlogic", "proofguard", "brutalhonesty"]
gigathink_perspectives = 15
laserlogic_depth = "exhaustive"
proofguard_sources = 5
proofguard_require_citation = true
timeout = 300

Research Quality Checklist

ReasonKit helps verify:

| Criterion | Question |
|---|---|
| Sample size | Is n sufficient for claimed effect? |
| Effect size | Is it realistic or suspiciously large? |
| Replication | Has it been independently replicated? |
| Pre-registration | Was hypothesis registered before data? |
| Conflicts | Are there financial/ideological conflicts? |
| Publication bias | Are null results published? |
| Methodology | Is design appropriate for question? |

Common Research Biases

| Bias | How ReasonKit Helps |
|---|---|
| Confirmation bias | BrutalHonesty challenges your preferences |
| Publication bias | ProofGuard asks about null results |
| Authority bias | LaserLogic evaluates arguments, not authors |
| Recency bias | GigaThink includes historical perspectives |

Academic Use Cases

Thesis Direction

rk think "My thesis proposal is [X]. Advisor likes it. What's wrong with it?" --deep

Peer Review Preparation

rk think "I'm submitting to [journal]. What will reviewers criticize?" --paranoid

Grant Writing

rk think "My grant proposal claims [X]. How would a skeptical reviewer attack this?" --deep

Debate Preparation

rk think "I'm presenting position [X]. What's the strongest counterargument?" --balanced

Tips for Research Analysis

  1. Include methodology details β€” Design, sample size, statistical approach

  2. Specify the claim precisely β€” Vague claims get vague analysis

  3. Ask for counterarguments β€” β€œWhat’s wrong with this?” is valuable

  4. Use paranoid for citations β€” Avoid citing weak evidence

  5. Run before and after β€” Check assumptions before research, verify conclusions after