# Research & Analysis

🔬 Structured thinking for academic research, due diligence, and investigative work.

Research benefits from ReasonKit's multi-tool approach, which helps ensure comprehensive, unbiased analysis.
## Common Research Questions

### Literature Review

```bash
rk think "What are the key debates in [your field]? What's established vs. contested?" --deep
```

### Methodology Check

```bash
rk think "I'm planning to use [methodology] for my study on [topic]. What are the potential weaknesses?" --balanced
```

### Claim Verification

```bash
rk think "Paper claims [specific claim]. How robust is this conclusion?" --paranoid
```

### Research Direction

```bash
rk think "I want to study [topic]. What angles are underexplored? What might be impactful?" --deep
```
## Example Analysis

Question: "I'm reviewing a paper that claims meditation reduces cortisol by 25%. Should I cite it?"

```bash
rk think "Paper claims meditation reduces cortisol by 25% (n=45, randomized). Should I cite this finding?" --paranoid
```
```text
┌──────────────────────────────────────────────────────────────┐
│ RESEARCH ANALYSIS: Citation Decision                         │
└──────────────────────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────────────┐
│ 💡 GIGATHINK: Research Perspectives                          │
├──────────────────────────────────────────────────────────────┤
│ 1. METHODOLOGY: Is RCT the right design for this?            │
│ 2. SAMPLE SIZE: n=45 is small for physiological claims       │
│ 3. EFFECT SIZE: 25% is large - extraordinary claims...       │
│ 4. REPLICATION: Has this been replicated?                    │
│ 5. PUBLICATION BIAS: What about null results?                │
│ 6. MEASUREMENT: How was cortisol measured? When?             │
│ 7. BLINDING: Can you blind meditation studies?               │
│ 8. CONTROL GROUP: What did control do? Placebo effect?       │
│ 9. AUTHOR CONFLICTS: Financial interests in meditation?      │
│ 10. META-ANALYSES: What do aggregated studies show?          │
└──────────────────────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────────────┐
│ ⚡ LASERLOGIC: Methodological Critique                       │
├──────────────────────────────────────────────────────────────┤
│ ISSUE 1: Small sample size                                   │
│ → n=45 has low statistical power                             │
│ → High risk of false positive (especially if p≈0.05)         │
│ → Effect likely inflated (winner's curse)                    │
│                                                              │
│ ISSUE 2: Large effect size suspicious                        │
│ → 25% cortisol reduction is unusually large                  │
│ → Larger effect sizes often shrink on replication            │
│ → Compare to meta-analytic estimates                         │
│                                                              │
│ ISSUE 3: Blinding challenges                                 │
│ → Participants know if they're meditating                    │
│ → Expectation effects can be substantial                     │
│ → Hard to have true placebo in meditation studies            │
└──────────────────────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────────────┐
│ 🛡️ PROOFGUARD: Source Verification                           │
├──────────────────────────────────────────────────────────────┤
│ TO VERIFY:                                                   │
│ • Journal impact factor and reputation                       │
│ • Author h-index and prior work                              │
│ • Funding sources                                            │
│ • Replication attempts                                       │
│ • Meta-analysis inclusion                                    │
│                                                              │
│ TRIANGULATION NEEDED:                                        │
│ • Find 2+ independent replications                           │
│ • Check meta-analyses for pooled effect size                 │
│ • Look for pre-registered studies                            │
│                                                              │
│ VERIFICATION QUESTIONS:                                      │
│ • Is the data publicly available?                            │
│ • Was the study pre-registered?                              │
│ • What do systematic reviews conclude?                       │
└──────────────────────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────────────┐
│ 🔥 BRUTALHONESTY: Academic Reality                           │
├──────────────────────────────────────────────────────────────┤
│ UNCOMFORTABLE TRUTH:                                         │
│ You want to cite this because it supports your thesis.       │
│ Confirmation bias makes weak evidence look strong.           │
│                                                              │
│ UNCOMFORTABLE TRUTH:                                         │
│ Single studies, especially with small n and large            │
│ effects, rarely replicate. The replication crisis            │
│ exists precisely because of papers like this.                │
│                                                              │
│ HONEST QUESTION:                                             │
│ If this study showed 0% effect, would you still cite it?     │
│ If no, you're cherry-picking.                                │
└──────────────────────────────────────────────────────────────┘

SYNTHESIS:
Don't cite as standalone evidence. Instead:
1. Find and cite meta-analyses (stronger evidence)
2. If citing, note limitations (small n, needs replication)
3. Use language like "some evidence suggests" not "studies show"
4. Look for pre-registered replications
```
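The "low statistical power" point from the LaserLogic critique is easy to sanity-check yourself with a back-of-envelope calculation. The sketch below is not part of ReasonKit; it is a plain-Python normal approximation for the power of a two-sided, two-sample comparison, applied to roughly 22 participants per group (n=45 total) and a conventional "medium" standardized effect (d = 0.5):

```python
from statistics import NormalDist

def approx_power(n_per_group: int, d: float, alpha: float = 0.05) -> float:
    """Normal-approximation power for a two-sided, two-sample test of means.

    power ≈ Φ(d · sqrt(n/2) − z_{1−α/2})
    (ignores the negligible lower-tail rejection region)
    """
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    noncentrality = d * (n_per_group / 2) ** 0.5
    return NormalDist().cdf(noncentrality - z_crit)

# n = 45 total ≈ 22 per group; d = 0.5 is Cohen's "medium" effect
power = approx_power(22, 0.5)
print(f"approximate power: {power:.2f}")  # prints "approximate power: 0.38"
```

At roughly 38% power, a study this size misses a true medium effect most of the time, and the significant results that do appear tend to overestimate it, which is exactly the winner's-curse inflation flagged above.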
## Research-Specific Profile

```toml
[profiles.research]
tools = ["gigathink", "laserlogic", "proofguard", "brutalhonesty"]
gigathink_perspectives = 15
laserlogic_depth = "exhaustive"
proofguard_sources = 5
proofguard_require_citation = true
timeout = 300
```
## Research Quality Checklist

ReasonKit helps verify:
| Criterion | Question |
|---|---|
| Sample size | Is n sufficient for claimed effect? |
| Effect size | Is it realistic or suspiciously large? |
| Replication | Has it been independently replicated? |
| Pre-registration | Was hypothesis registered before data? |
| Conflicts | Are there financial/ideological conflicts? |
| Publication bias | Are null results published? |
| Methodology | Is design appropriate for question? |
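One way to operationalize a checklist like this is as a small screening script you run over candidate citations before deciding what to cite. The sketch below is hypothetical: the field names, thresholds, and criteria subset are illustrative choices, not ReasonKit features:

```python
# Encode a subset of the checklist as predicates over a study record.
# Thresholds (e.g. n >= 100) are illustrative, not authoritative.
CHECKLIST = {
    "sample_size": lambda s: s["n"] >= 100,
    "replication": lambda s: s["independent_replications"] >= 1,
    "pre_registration": lambda s: s["pre_registered"],
    "conflicts": lambda s: not s["author_conflicts"],
}

def red_flags(study: dict) -> list[str]:
    """Return the names of checklist criteria the study fails."""
    return [name for name, passes in CHECKLIST.items() if not passes(study)]

# The meditation study from the example analysis above.
meditation_study = {
    "n": 45,
    "independent_replications": 0,
    "pre_registered": False,
    "author_conflicts": False,
}
print(red_flags(meditation_study))
# prints ['sample_size', 'replication', 'pre_registration']
```

A study that trips several flags is not necessarily wrong, but it should be cited with the hedged language the synthesis above recommends, if at all.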
## Common Research Biases
| Bias | How ReasonKit Helps |
|---|---|
| Confirmation bias | BrutalHonesty challenges your preferences |
| Publication bias | ProofGuard asks about null results |
| Authority bias | LaserLogic evaluates arguments, not authors |
| Recency bias | GigaThink includes historical perspectives |
## Academic Use Cases

### Thesis Direction

```bash
rk think "My thesis proposal is [X]. Advisor likes it. What's wrong with it?" --deep
```

### Peer Review Preparation

```bash
rk think "I'm submitting to [journal]. What will reviewers criticize?" --paranoid
```

### Grant Writing

```bash
rk think "My grant proposal claims [X]. How would a skeptical reviewer attack this?" --deep
```

### Debate Preparation

```bash
rk think "I'm presenting position [X]. What's the strongest counterargument?" --balanced
```
## Tips for Research Analysis

- **Include methodology details**: design, sample size, statistical approach
- **Specify the claim precisely**: vague claims get vague analysis
- **Ask for counterarguments**: "What's wrong with this?" is a valuable prompt
- **Use `--paranoid` for citations**: avoid citing weak evidence
- **Run before and after**: check assumptions before research, verify conclusions after